Re: [systemd-devel] [PATCH] Remove accelerometer helper

2015-06-29 Thread Tom Gundersen
On Sat, Jun 27, 2015 at 10:02 PM, Kay Sievers k...@vrfy.org wrote:
 On Fri, May 22, 2015 at 3:42 PM, Bastien Nocera had...@hadess.net wrote:
 It's moved to the iio-sensor-proxy D-Bus service.
 ---
  .gitignore |   1 -
  Makefile.am|  15 --
  rules/61-accelerometer.rules   |   3 -
  src/udev/accelerometer/Makefile|   1 -
  src/udev/accelerometer/accelerometer.c | 303 

 https://github.com/systemd/systemd/pull/387

This has now been merged, so distros should be aware that they need to pick
up iio-sensor-proxy with the next systemd release.

Cheers,

Tom


[systemd-devel] an enhancement request/suggestion

2015-06-29 Thread lejeczek

dear devel

could we please have a feature where journalctl -o cat does not lose
the coloring of the output?
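
For reference, the invocation in question looks something like this (unit
name hypothetical); the complaint is that the plain-text "cat" output mode
drops the colouring that the default output shows:

  # print only the message field, without timestamps or metadata
  journalctl -u myservice.service -o cat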


many thanks
P


[systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system

I just installed Debian 8.1; on the whole my reaction is mixed. One
thing however really pisses me off more than any other:

5.6.1. Stricter handling of failing mounts during boot under systemd

This is not "stricter", it is a change in default behaviour.

This change is a shit idea. Who do I shout at to get the behaviour
modified back to something sensible?






Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Jóhann B . Guðmundsson



On 06/29/2015 02:08 PM, jon wrote:

https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system

I just installed debian 8.1, on the whole my reaction is mixed, one
thing however really pisses me off more than any other

5.6.1. Stricter handling of failing mounts during boot under systemd

 This is not "stricter", it is a change in default behaviour.

 This change is a shit idea. Who do I shout at to get the behaviour
 modified back to something sensible?



The systemd community only recommends what downstream consumers of it
should do; it does not dictate or otherwise decide how those consumers
eventually implement systemd. So if you don't like how systemd is
implemented in Debian, you should voice your concerns with the Debian
community.


JBG


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
On Mon, 2015-06-29 at 14:21 +, Jóhann B. Guðmundsson wrote:
 
 On 06/29/2015 02:08 PM, jon wrote:
  https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system
 
  I just installed debian 8.1, on the whole my reaction is mixed, one
  thing however really pisses me off more than any other
 
  5.6.1. Stricter handling of failing mounts during boot under systemd
 
  This is not "stricter", it is a change in default behaviour.
 
  This change is a shit idea. Who do I shout at to get the behaviour
  modified back to something sensible?
 
 
 The systemd community only recommends what downstream consumers of it
 should do; it does not dictate or otherwise decide how those consumers
 eventually implement systemd. So if you don't like how systemd is
 implemented in Debian, you should voice your concerns with the Debian
 community.
Ok

Who writes/maintains the code that parses nofail in /etc/fstab ?
Who writes/maintains the typical system boot code (whatever has replaced
rc.sysinit) ?

I suspect the answer to both is the systemd maintainers, in which case
is this not the correct place to bitch about it ?
 
Thanks,
Jon





Re: [systemd-devel] SysVInit service migration to systemd

2015-06-29 Thread Mantas Mikulėnas
On Jun 29, 2015 16:58, Lesley Kimmel ljkimme...@hotmail.com wrote:

 Jonathan;

 Thanks for the background and information. Since you clearly seem to have
a grasp of systemd please humour me with a few more questions (some of them
slightly ignorant):

 a) Why are PID files bad?
 b) Why are lock files bad?
 c) If a/b are so bad why did they persist for so many years in SysVInit?

init itself *doesn't need* either, though it only supports non-forking
services, and in practice the only service it ever manages is agetty. (And
sometimes xdm.)

But the init.d scripts are just shell scripts, which aren't capable of any
persistent monitoring, and they cannot rely on pid1 to do any monitoring
either. So pidfiles and lockfiles are the only (or at least the simplest)
way for these scripts to keep state from "init.d/foo start" across to
"init.d/foo stop".
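
As a rough sketch of that pattern (daemon "foo" and all paths hypothetical),
a classic init.d script keeps its state on disk between invocations:

  #!/bin/sh
  # /etc/init.d/foo -- minimal sketch, not a complete LSB script
  case "$1" in
    start)
      /usr/sbin/foo &                 # launch the daemon in the background
      echo $! > /var/run/foo.pid      # remember its PID for "stop"
      touch /var/lock/subsys/foo      # mark the service as started
      ;;
    stop)
      kill "$(cat /var/run/foo.pid)"  # the pidfile is the only state we have
      rm -f /var/run/foo.pid /var/lock/subsys/foo
      ;;
  esac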

Personally I would say they persisted in sysvinit because they pretty much
/define/ the sysvinit design (i.e. pid1 that does almost nothing, and
short-lived external scripts).

Though there's also openrc, which also runs on top of sysv pid1, but that's
really all I know about it. I wonder how it tracks processes. I heard it
uses cgroups?


Re: [systemd-devel] SysVInit service migration to systemd

2015-06-29 Thread Andrei Borzenkov
On Mon, 29 Jun 2015 08:58:11 -0500,
Lesley Kimmel ljkimme...@hotmail.com wrote:

 Jonathan;
 
 Thanks for the background and information. Since you clearly seem to have a 
 grasp of systemd please humour me with a few more questions (some of them 
 slightly ignorant):
 
 a) Why are PID files bad?

PID files as the primary way to track a running service are too
unreliable - it is easy for a service to die without removing its PID
file. A PID file as a means to track service startup, the way systemd
does it, is IMHO just fine. Actually, it is the only way for a service
to announce its readiness without adding explicit systemd support
(Type=notify).
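
As a hedged sketch of the two readiness mechanisms mentioned above (daemon
names and paths are made up):

  # self-daemonizing service: systemd regards it as started once the
  # parent exits, and reads the main PID from the PID file
  [Service]
  Type=forking
  PIDFile=/run/foo.pid
  ExecStart=/usr/sbin/foo --daemonize

  # service with explicit systemd support: it announces readiness
  # itself by calling sd_notify(3) with "READY=1"
  [Service]
  Type=notify
  ExecStart=/usr/sbin/bar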

 b) Why are lock files bad?
 c) If a/b are so bad why did they persist for so many years in SysVInit?
 d) Generically, how would you prescribe to use systemd to start Java 
 processes (for Java application servers) that are typically started from a 
 set of relatively complex scripts that are used to set up the environment 
 before launching the Java process? It seems that you are advocating to call, 
 as directly as possible, the target service/daemon. However, some things 
 don't seem so straight-forward.
 
 Thanks again!
 Lesley Kimmel, RHCE
 Unix/Linux Systems Engineer
 
  Date: Sun, 28 Jun 2015 14:29:40 +0100
  From: j.deboynepollard-newsgro...@ntlworld.com
  To: systemd-devel@lists.freedesktop.org
  Subject: Re: [systemd-devel] SysVInit service migration to systemd
  
  Lesley Kimmel:
   I've been working with RHEL5/6 for the past several years and have 
   developed many init scripts/services which generally use lock files 
   and PID files to allow for tracking of the service status. We are 
   moving to RHEL7 (systemd) in the near future and I am looking for 
   instruction or tutorials on how to effectively migrate these scripts 
   to work with systemd.   [...] It looks like I may be able to use the 
   'forking' type with the 'pidfile' parameter to somewhat mimic what the 
   scripts do today.
  
  You don't do such migration.  You understand how your daemons are 
  started, and you write service (and other) units to describe that. You 
  do not start with the assumption that you migrate straight from one to 
  the other, particularly if your existing mechanisms involve the 
  dangerous and rickety things (e.g. PID files) that proper service 
  management is designed to avoid in the first place, or things such as 
  subsystem lock files which proper service management has no need of by 
  its very nature.  Type=forking specifies a quite specific readiness 
  protocol that your daemon has to enact, lest it be regarded as failed.  
  It isn't a catch-all for anything that might fork in any possible 
  way.  And service managers know, by dint of simply remembering, what 
  processes they started and whether they've already started them.
  
  I've been collecting case studies of people who have migrated to systemd 
  exceedingly badly, and constructed some quite horrendous systems, 
  because they've done so without either consideration of, or 
  understanding of, how their systems actually work.  Here's one candidate 
  that I have yet to add, because I'm hoping that now they've been told 
  that they are going wrong they will correct themselves, whose errors are 
  good to learn from.
  
  There's a computer game called ARK: Survival Evolved.  Its daemon is 
  a program named ShooterGameServer.  To run this program continually as a 
  daemon, someone named Federico Zivolo and a couple of other unidentified 
  people came up with the somewhat bizarre idea of running it under 
  screen, and using screen's commands to send its pseudo-terminal an 
  interrupt character, in order to send a SIGINT to the daemon when it 
  came time to shut it down.  They wrapped this up in a System 5 rc 
  script, taking the conventional start and stop verbs, named 
  arkmanager.  Its prevent-multiple-instances system was not lock files, 
  but grepping the process table.
  
  Wrapped around this they put another System 5 rc script, named 
  arkdaemon, which also took the conventional start and stop verbs, 
  and which treated the wrapper arkmanager script as the daemon, 
  recording the process ID of the shell interpreter for the arkmanager 
  script in a PID file, as if it were the actual daemon's process ID.  It 
  also did various other bad things that proper service managers 
  eliminate, including grepping the process table (again), abusing su to 
  drop privileges, using arbitrary 5 second sleeps to make the timing 
  almost work, and (sic) hardwired ECMA-48 SGR sequences to change the 
  colour of output that isn't going to a terminal in the first place.
  
  Then they wrote a systemd service unit file, arkdeamon.service (sic), 
  that did this:
  
ExecStart=/etc/init.d/arkdaemon start
ExecStop=/etc/init.d/arkdaemon stop
Type=forking
  
  A couple of days ago, I pointed out the errors of even starting down 
  this route to them, and wrote a systemd unit file for them that 
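
  As a hedged illustration of the direct approach argued for above (user,
  path and signal handling below are assumptions, not the actual unit that
  was written), such a unit runs the real daemon rather than the wrapper
  scripts:

    [Service]
    User=ark
    ExecStart=/opt/ark/ShooterGameServer
    # SIGINT is what the screen-based hack delivered to stop the game
    KillSignal=SIGINT

  With this, the PID file, the process-table grepping, the su calls and
  the sleeps all become unnecessary, since the service manager itself
  remembers the main process.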

Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Andrei Borzenkov
On Mon, 29 Jun 2015 14:21:44 +,
Jóhann B. Guðmundsson johan...@gmail.com wrote:

 
 
 On 06/29/2015 02:08 PM, jon wrote:
  https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system
 
  I just installed debian 8.1, on the whole my reaction is mixed, one
  thing however really pisses me off more than any other
 
  5.6.1. Stricter handling of failing mounts during boot under systemd
 
  This is not "stricter", it is a change in default behaviour.
 
  This change is a shit idea. Who do I shout at to get the behaviour
  modified back to something sensible?
 
 
 The systemd community only recommends what downstream consumers of it 
 should do

No. systemd changed the interpretation of well-established
configurations in an incompatible way. There is no way to retain the
existing behavior. That is far from "recommends" - it is "my way or the
highway".

The problem is that the systemd community does not care to even explain
the reasons for this change. It simply ignores complaints and bug
reports. Of course the systemd community does not get to see many of
them - most users who are pissed off by these changes are not the kind
who would e-mail this list. So to the systemd community it looks like
everyone is happy except a couple of old farts.

 but does not dictate or otherwise decide how those consumers
 eventually decide to implement systemd, so if you don't like how
 systemd is implemented in Debian you should voice your concerns with
 the Debian community.

Debian? You assume users of every other distribution are happy?


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 15:08, jon (j...@jonshouse.co.uk) wrote:

 https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system
 
 I just installed debian 8.1, on the whole my reaction is mixed, one
 thing however really pisses me off more than any other
 
 5.6.1. Stricter handling of failing mounts during boot under systemd
 
 This is not "stricter", it is a change in default behaviour.
 
 This change is a shit idea. Who do I shout at to get the behaviour
 modified back to something sensible?

Here's a hint: it's a really bad idea to introduce yourself to the
systemd community with a mail filled with "shit idea", "pisses me off",
"shout at", and claims that the behaviour we implemented isn't
"sensible". It's only a good idea if you are trying to get moderated.

You can add "nofail" to your fstab lines to get something that more
closely resembles the old logic.

But do note that we won't make this the default, since it creates a
race and is simply insecure in many cases. It's racy since mounting
will then race against services being started. And it's insecure
because services might get access to files and directories that are
normally not accessible, due to overmounting.

We cannot allow this race and security problem to be the default, but
you can choose to opt in to it by adding "nofail" to the fstab lines in
question. "nofail" is actually what you should have used under sysvinit
too for these cases, but it becomes more relevant with the behaviour
systemd exposes.
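
For illustration, a hypothetical fstab entry for such a non-essential
disk would look like this (label and mount point made up):

  LABEL=backup  /disks/backup  ext4  defaults,nofail  0  2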

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Jóhann B . Guðmundsson



On 06/29/2015 04:17 PM, Andrei Borzenkov wrote:

No. systemd changed interpretation of well established configurations
in incompatible way. There is no way to retain existing behavior. It is
far from recommends - it is my way or highway.


I'm not following: which interpretation of well-established
configurations has systemd changed here?


Do you have any reliable way to determine which file systems are 
important and which ones can be skipped at bootup?


What do you propose systemd should do if a device listed in fstab is
not available at bootup and that device has not been flagged as one
that can be skipped?


JBG


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
On Mon, 2015-06-29 at 17:54 +0200, Reindl Harald wrote:
 Am 29.06.2015 um 17:01 schrieb jon:
  On Mon, 2015-06-29 at 14:21 +, Jóhann B. Guðmundsson wrote:
 
  On 06/29/2015 02:08 PM, jon wrote:
  https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system
 
  I just installed debian 8.1, on the whole my reaction is mixed, one
  thing however really pisses me off more than any other
 
  5.6.1. Stricter handling of failing mounts during boot under systemd
 
   This is not "stricter", it is a change in default behaviour.
  
   This change is a shit idea. Who do I shout at to get the behaviour
   modified back to something sensible?
 
 
   The systemd community only recommends what downstream consumers of it
   should do; it does not dictate or otherwise decide how those consumers
   eventually implement systemd. So if you don't like how systemd is
   implemented in Debian, you should voice your concerns with the
   Debian community.
  Ok
 
  Who writes/maintains the code that parses nofail in /etc/fstab ?
  Who writes/maintains the typical system boot code (whatever has replaced
  rc.sysinit) ?
 
  I suspect the answer to both is the systemd maintainers, in which case
  is this not the correct place to bitch about it?
 
 i don't get what there is to bitch about at all
 
 you have a mountpoint in /etc/fstab and don't care whether it gets
 mounted at boot, so that data gets written into the folder instead of
 onto the correct filesystem?
You are making assumptions !

Not everyone uses linux in the same way. I have a number of servers that
are RAID, but others I maintain have backup disks instead.

The logic with backup disks is that the volumes are formatted and
installed, then a backup is taken. The backup disk is then removed from
the server and replaced only for backups.

This machine for example is a gen8 HP microserver, it has 4 removable
(non hotswap) disks.
/etc/fstab
LABEL=volpr     /disks/volpr     ext4    defaults    0   0
LABEL=volprbak  /disks/volprbak  ext4    defaults    0   0
LABEL=volpo     /disks/volpo     ext4    defaults    0   0
LABEL=volpobak  /disks/volpobak  ext4    defaults    0   0

At install it looks like this, but after the machine is populated the
two bak volumes are removed. I want (and expect) them to be mounted
again when replaced, but they spend most of the month in a desk drawer.

It is a perfectly valid way of working, it does not cause disruption
like making and breaking mirror pairs - and most importantly it has
been the way I have worked for 10-plus years!

I have also built numerous embedded devices that have speculative fstab
entries for volumes that are only present sometimes - in the factory,
for example.


 normally that is not what somebody expects and if that is the desired 
 behavior for you just say that to your operating system and add nofail
This was, and most importantly IS, the behaviour I expect.

To get this in perspective: I am not complaining about the idea of
dropping into admin if an FS is missing; it just should not BREAK the
previous behaviour by being the default.
The default for the past N years has been to continue to boot if the FS
is missing; that should STILL be the default.

The nofail flag logic is the wrong way up.  An option should have been
added to indicate that the presence of a given FS is expected and that
failure to mount should drop out to admin; the install tools should
then have been modified to add this option to fstab, rather than
changing the default behaviour.

This would then not break machines as they are updated to include (grits
teeth) systemd.

Thanks,
Jon




Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Jóhann B . Guðmundsson



On 06/29/2015 03:01 PM, jon wrote:

On Mon, 2015-06-29 at 14:21 +, Jóhann B. Guðmundsson wrote:

On 06/29/2015 02:08 PM, jon wrote:

https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system

I just installed debian 8.1, on the whole my reaction is mixed, one
thing however really pisses me off more than any other

5.6.1. Stricter handling of failing mounts during boot under systemd

This is not "stricter", it is a change in default behaviour.

This change is a shit idea. Who do I shout at to get the behaviour
modified back to something sensible?


The systemd community only recommends what downstream consumers of it
should do; it does not dictate or otherwise decide how those consumers
eventually implement systemd. So if you don't like how systemd is
implemented in Debian, you should voice your concerns with the Debian
community.

Ok

Who writes/maintains the code that parses nofail in /etc/fstab ?
Who writes/maintains the typical system boot code (whatever has replaced
rc.sysinit) ?

I suspect the answer to both is the systemd maintainers, in which case
is this not the correct place to bitch about it ?


util-linux (see man mount) is what provides the nofail option, and I
don't follow what you mean by getting the behaviour modified back to
something sensible, since systemd already does what is sensible to do,
and always has.


The fact is systemd has no means of figuring out which file systems are
crucial and which ones are not, hence it has always done the safe thing
and dropped users to the emergency target if a device listed in fstab
has not appeared after a period of time. Administrators have had to
tell systemd which entries they consider non-crucial to the bootup
process by adding the nofail mount option to the relevant device entry
in fstab. So I'm not sure what the Debian community considers "stricter
handling of failing mounts during boot under systemd", since this has
always been the case with systemd.


Perhaps systemd's behaviour differs from one (or all) of the other init
system(s) that exist in Debian, and that's what this documentation
entry is about - not any change to systemd itself.


My advice to you is simply to add the nofail mount option to the fstab
entries for devices you consider not crucial to the bootup process, and
systemd will happily carry on with your boot without dropping you to
the emergency target if such a device is not available.


JBG


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 16:01, jon (j...@jonshouse.co.uk) wrote:

 On Mon, 2015-06-29 at 14:21 +, Jóhann B. Guðmundsson wrote:
  
  On 06/29/2015 02:08 PM, jon wrote:
   https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system
  
   I just installed debian 8.1, on the whole my reaction is mixed, one
   thing however really pisses me off more than any other
  
   5.6.1. Stricter handling of failing mounts during boot under systemd
  
   This is not "stricter", it is a change in default behaviour.
  
   This change is a shit idea. Who do I shout at to get the behaviour
   modified back to something sensible?
  
  
  The systemd community only recommends what downstream consumers of it
  should do; it does not dictate or otherwise decide how those consumers
  eventually implement systemd. So if you don't like how systemd is
  implemented in Debian, you should voice your concerns with the
  Debian community.
 Ok
 
 Who writes/maintains the code that parses nofail in /etc/fstab ?
 Who writes/maintains the typical system boot code (whatever has replaced
 rc.sysinit) ?
 
 I suspect the answer to both is the systemd maintainers, in which case
 is this not the correct place to bitch about it ?

One more mail like this and you will be moderated.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 19:17, Andrei Borzenkov (arvidj...@gmail.com) wrote:

  The systemd community only recommends what downstream consumers of it 
  should do
 
 No. systemd changed the interpretation of well-established
 configurations in an incompatible way. There is no way to retain the
 existing behavior. That is far from "recommends" - it is "my way or
 the highway".

The old scheme worked by waiting for some weakly defined time until all
block devices had appeared, and then mounting them into place and
proceeding. But that's not how computers work these days: devices can
show up at basically any time, and busses like USB or iSCSI - or
anything else that is not the soldered-in ATA you loaded the kernel
from - have no strict rules about when their devices have to have shown
up after you turn on the power.

The old logic was not reliable: if you had anything that was not the
simple ATA case, you'd run into races where the mount call would not
find the devices it needed, because they hadn't finished probing yet.
And if you had the simple ATA case, you'd have to wait much longer for
the boot than necessary, due to the weakly and weirdly defined settle
time.

So: the old mode was borked. It was racy, insecure, and simply not
compatible with how hardware works.

systemd is really about dependencies and starting/waiting for exactly
the events that we need to wait for. On the one hand that means we
don't wait longer than necessary, and on the other hand it means we
don't wait shorter than necessary. The old pre-systemd solution was in
conflict with both of these goals.

And this has nothing to do with "my way or the highway"; this is really
just about getting basic behaviour right, for something that is
conceptually at the core of what systemd is.

Also, we do offer an opt-in to something resembling the old behaviour
via "nofail", but you should use it only while acknowledging that
services will then race against the mounts, might see parts of the file
system below a mount point, and might open files there only to see the
directory overmounted later on.

 The problem is that systemd community does not care to even explain
 the reasons for this change.

This is simply not the truth. This has come up many times before, and I
have given the very same explanation as above every single time.

 It simply ignores any complaints and bug
 reports. 

Wut? We ignore complaints and bug reports?

I spend more time on bugzilla these days than on anything else. Good
to know that you appreciate that.

 Of course systemd community does not get to see much of them
 - most users who are pissed off by these changes are not those who
 would try to e-mail this list. So to systemd community it looks like
 everyone is happy except couple of old farts. 

Oh, wow, I figure you mean me by that? 

Thank you for your very constructive addition to the discussion.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon

  you have a mountpoint in /etc/fstab and don't care whether it gets
  mounted at boot, so that data gets written into the folder instead of
  onto the correct filesystem?
  You are making assumptions !
 
 no
 
  Not everyone uses linux in the same way. I have a number of servers that
  are RAID, but others I maintain have backup disks instead.
 
 and?
 
  The logic with backup disks is that the volumes are formatted and
  installed, then a backup is taken. The backup disk is then removed from
  the server and replaced only for backups.
 
  This machine for example is a gen8 HP microserver, it has 4 removable
  (non hotswap) disks.
  /etc/fstab
  LABEL=volpr     /disks/volpr     ext4    defaults    0   0
  LABEL=volprbak  /disks/volprbak  ext4    defaults    0   0
  LABEL=volpo     /disks/volpo     ext4    defaults    0   0
  LABEL=volpobak  /disks/volpobak  ext4    defaults    0   0
 
  At install it looks like this, but after the machine is populated the
  two bak volumes are removed. I want (and expect) them to be mounted
  again when replaced, but they spend most of the month in a desk draw.
 
  It is a perfectly valid way of working, does not causes disruption like
  making and breaking mirror pairs - and most importantly has been the way
  I have worked for 10 plus years !
 
  I have also built numerous embedded devices that have speculative fstab
  entries for volumes that are only present sometimes, In the factory for
  example.
 
  and why don't you just add nofail?
  it's way shorter than writing a ton of emails
Yes, this I have in fact done.  I have the right to complain though ! 

Also, I get pretty fed up with changes to Linux.  It gets harder and
harder to maintain from the command line as defaults are constantly
changed and new (sometimes ill-thought-out) user-space tools are added.
PCs may ever grow in memory, but as I age the reverse is true for me.

By the time I next install a machine I will have forgotten that I need
to add an extra fstab option, I will reboot, and it will bite me all
over again... just because someone thought it was not important to
preserve a behaviour more useful to me.

  normally that is not what somebody expects and if that is the desired
  behavior for you just say that to your operating system and add nofail
  This was, and most importantly IS, the behaviour I expect.
 
  what about changing your expectations?
Why ! Two can play that game.

Unless you are going to have sshd come up before the admin shell, this
simple change is annoying to many people, I suspect.

None of my low-end machines have true remote admin, so if I add an
entry to fstab that is wrong, or even worse an fstab entry that seems
to work now but is for a volume that is offline when the machine
reboots, I now need to:

1) Walk down some stairs
2) Open the rack
3) Plug in a monitor and keyboard
4) Re-boot the machine as some clever hardware designer decided the VGA
display would no longer come up without reading an EDID from the monitor
first.
5) Go into admin shell
# find issue
# fix issue
# reboot

6) Wait.
7) See if it is now multi-user.

Go back to my office, use machine.

I am not writing this to take the p***; this really is the type of
thing that people maintaining a few servers on a small scale have to
consider.

 there are way bigger changes than the need to be specific in configs
Yes, but most of those are not configs that most USERS have written,
whereas fstab entries are.


 
  To get this in perspective I am not complaining about the idea of not
  going into admin if an FS is missing, it just should not BREAK the
  previous behaviour by being the default
 
 why?
1) Because it needlessly breaks/disrupts the way some people work.
2) Because a machine that currently works will break (fail to boot) if
the OS is updated and the fstab is left unmodified.
3) Because by failing to go multi-user, the issue must now be fixed
locally, adding yet more needless pain that did not exist before.


 
  The default for the past N years has been to continue to boot if the FS
  is missing, that should STILL be the default.
 
 why?
See above.


  The flag nofail logic is the wrong way up.  An option should have been
  added to indicate that the presence of a given FS is expected and
  failure to mount should crash out to admin, the install tools should
  then have been modified to add this option to fstab rather than changing
  the default behaviour.
 
 nonsense, the nofail was *not* invented by systemd, it existed in fact 
 long before systemd
I (and I expect many others) go by the observed behaviour.

If nofail pre-dates systemd then that is interesting, but unimportant,
as best I can tell it did not do anything!
I have never seen it in the field, so I suspect that it did nothing at
all until now?


Thanks,
Jon



[systemd-devel] getent hosts machine

2015-06-29 Thread Johannes Ernst
I was hoping that
getent hosts containername
would work, just like
getent hosts hostname
where hostname can be anything else in the hosts: field in nsswitch.conf. But 
no such luck.

The containername does get resolved correctly in other cases, e.g. when 
pinging it.

Not knowing how getent actually works, I don't know why that is, but I
figured I'd mention it.
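
For what it's worth, container-name resolution for getent is typically
provided by systemd's nss-mymachines module; assuming it is installed,
the hosts line in /etc/nsswitch.conf has to list it, roughly like this:

  hosts: files mymachines dns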

Cheers,



Johannes.



Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
On Mon, 2015-06-29 at 20:18 +0200, Lennart Poettering wrote:
 On Mon, 29.06.15 16:19, Jóhann B. Guðmundsson (johan...@gmail.com) wrote:
 
  Who writes/maintains the code that parses nofail in /etc/fstab ?
  Who writes/maintains the typical system boot code (whatever has replaced
  rc.sysinit) ?
  
  I suspect the answer to both is the systemd maintainers, in which case
  is this not the correct place to bitch about it ?
  
  util-linux (see man mount) is what provides the nofail option, and I
  don't follow what you mean by getting the behaviour modified back to
  something sensible, since systemd already does what is sensible to do
  and always has.
 
 Well, that's not the full story. systemd interprets nofail, and builds
 on the semantics that util-linux defines, but expands on them.
 
 Hence, yes, we do take blame for the change of behaviour, but I am
 sure it's the right thing to do.

I am not so sure at all.

It either needs to be less radical, i.e. preserve the default of not
going into the admin shell if a mount fails, or ... yikes ... be more
radical and bring up networking/sshd with an admin console (if sshd is
configured, as it often is) in parallel with the local admin console.

Thanks,
Jon





Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 18:50, jon (j...@jonshouse.co.uk) wrote:

  and why don't you just add nofail?
  it's way shorter than writing a ton of emails

 Yes, this I have in fact done.  I have the right to complain though
 ! 

You don't have the right to do this on this mailing list though.

Please do it elsewhere, like on Slashdot or so. The systemd mailing
list is a forum for technical discussions, and rants like yours are
simply not appropriate; they are just noise and not constructive.

 I am not writing this to take the p***, this really the type of thing
 that people maintaining a few servers on a small scale have to consider.

Here's a recommendation for the next time you post a rant like this:
do your research first, and see if there might be technical reasons
for the behaviour you see, instead of assuming there are no reasons
for it except that the authors of the software want to be dicks to
you...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 19:20, jon (j...@jonshouse.co.uk) wrote:

 Reversing the logic by adding a mustexist fstab option and keeping the
 default behaviour would fix it.

At this time, systemd has been working this way for 5 years now. The
behaviour it implements is also the right behaviour, I am sure, and the
"nofail" switch even predates systemd. Hence I am very sure the default
behaviour should stay the way it is.

 Bringing up networking/sshd in parallel to the admin shell would also
 mitigate my issue

That's a distro decision really. Note though that many networking
implementations, as well as sshd, are actually not ready to run in
early boot the way emergency mode is: i.e. they assume access to /var
works, use PAM, and so on, all of which you had better avoid if you
want to run in that boot phase.

 I can see that both proposed solutions have issues, but I suspect I am
 not the only one who will not be pleased about this behaviour change.
 
 Changes seem to be made with a bias towards desktops or larger data
 centres, but what about the people using discarded PCs and maintaining
 small servers? Lots of these are floating around smaller organisations.

As you might know, my company cares primarily about containers and big
servers, while I personally run things on a laptop and a smaller server
on the Internet. Hence believe me that I usually care about laptop
setups at least as much as about server setups.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 16:19, Jóhann B. Guðmundsson (johan...@gmail.com) wrote:

 Who writes/maintains the code that parses nofail in /etc/fstab ?
 Who writes/maintains the typical system boot code (whatever has replaced
 rc.sysinit) ?
 
 I suspect the answer to both is the systemd maintainers, in which case
 is this not the correct place to bitch about it ?
 
 util-linux (see man mount) is what provides the nofail option, and I
 don't follow what you mean by getting the behaviour modified back to
 something sensible, since systemd already does what is sensible to do
 and always has.

Well, that's not the full story. systemd interprets nofail, and builds
on the semantics that util-linux defines, but expands on them.

Hence, yes, we do take blame for the change of behaviour, but I am
sure it's the right thing to do.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
On Mon, 2015-06-29 at 18:45 +0200, Lennart Poettering wrote:
 On Mon, 29.06.15 19:17, Andrei Borzenkov (arvidj...@gmail.com) wrote:
 
   The systemd community only recommends what downstream consumers of it 
   should do
  
  No. systemd changed the interpretation of well-established
  configurations in an incompatible way. There is no way to retain the
  existing behavior. That is far from "recommends" - it is "my way or
  the highway".
 
 The old scheme worked by waiting for some weakly defined time until
 all block devices appeared first, and then mount them into places and
 proceed. But that's not how computers work these days, since devices
 can show up at any time basically and busses like USB or iSCSI or
 anything else that is basically not soldered-in ATA that you loaded
 the kernel from have no strict rules by which time the devices have to
 have shown up after you turned on the power.
 
 The old logic was not reliable: if you had anything that was not the
 simple ATA case, then you'd run into races, where the mount call would
 not find the devices it needed because they hadn't finished probing
 yet. And if you had the simple ATA case, then you'd have to wait much
 longer for the boot than necessary due to the weakly and weirdly
  defined settle time, that was much longer than necessary.
 
  So: the old mode was borked. It was racy, insecure, and simply not 
 compatible with how hardware works. 
 
 systemd is really about dependencies and starting/waiting for exactly
 the events that we need to wait for. On one hand that means we don't
 wait for longer than necessary, and on the other hand this means we
 don't wait for shorter than necessary. The old pre-systemd solution was
 in conflict with both of these goals.
 
 And this has nothing to do with my way or the highway, this is
 really just about getting basic behaviour right, for something that is
 conceptually at the core of what systemd is.
 
 Also, we do offer the opt-in to something resembling the old behaviour
 via nofail, but you should use that only acknowledging that the
 services will race against the mounts then, and might see parts of the
 file system below it, might open files there only to see the directory
 overmounted later on.

Thank you, good explanation, I understand.

It still leaves me with real-world problems from the change, though.

Reversing the logic by adding a mustexist fstab option and keeping the
default behaviour would fix it.

Bringing up networking/sshd in parallel to the admin shell would also
mitigate my issue

I can see that both proposed solutions have issues, but I suspect I am
not the only one who will not be pleased about this behaviour change.

Changes seem to be made with a bias towards desktops or larger data
centres, but what about the people using discarded PCs and maintaining
small servers? Lots of these are floating around smaller organisations.

Thanks,
Jon




Re: [systemd-devel] getent hosts machine

2015-06-29 Thread Johannes Ernst
 On Jun 29, 2015, at 10:32, Johannes Ernst johannes.er...@gmail.com wrote:
 
 I was hoping that
   getent hosts containername
 would work, just like
   getent hosts hostname
 where hostname can be anything else in the hosts: field in nsswitch.conf. 
 But no such luck.
 
 The containername does get resolved correctly in other cases, e.g. when 
 pinging it.
 
 Not knowing how getent actually works, I don’t know why that is, but I 
 figured I mention it.

I take it back. A reboot fixed it.

It appears I had a different problem: machinectl (suddenly) stopped
showing any containers, but ps still showed several systemd-nspawn
processes. These containers were originally shown by machinectl. The
corresponding NICs were also still there. Wildly speculating, I'd say it
might be possible that getent stopped working at that time, and "ping"
used a cached value.

This may be the same situation as I reported here:
http://lists.freedesktop.org/archives/systemd-devel/2015-June/033150.html
- perhaps the container was still there, it just didn't show up with
machinectl any more, which made me believe it was a leftover interface.

(No, I did not touch --register, and note that the containers did show
up with machinectl after they had been started.)

I’ll see whether I can reproduce this, after reboot.

Cheers,



Johannes.



Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 20:36, jon (j...@jonshouse.co.uk) wrote:

 What for example is the technical reason why the nofail logic could
 not have been inverted ? I have not heard any technical argument that is
 compelling as to why this could not have worked the other way up and
 default behaviour not have been kept, other than that would require to
 cooperation from the installer authors to add some other flag to
 volumes that must always be available at boot time?

Well, there are two kinds of mounts: some where it is essential that
they are around when the system boots up, and others where it isn't.

A mount on /var is clearly essential, as are pretty much all mounts
below /var, though there might be exceptions. Mounts in /srv are
essential, too. Mounts which are often non-essential are external
media, USB sticks and suchlike. However, those are usually handled via
something like udisks, and only exceptionally via /etc/fstab. That
together is already an indication that the current behaviour should be
the default when you don't specify anything.

Three other reasons are: "nofail" has existed in util-linux for a long
time; there is no "fail" option defined, hence for systemd too the
nofail case is the opt-in and failing is the default; and changing
behaviour forth and back and forth and back is just wrong. The
behaviour systemd exposes has been this way for 5 years now, and we
shouldn't change it without a really strong reason -- but I am pretty
sure your specific usecase does not qualify.

Also, again, nofail predates systemd: you should have used it for your
usecase even under sysvinit. If you will, the old setup was already
borked for you, even though admittedly the effect was less fatal.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon

 A mount on /var is clearly essential, as are pretty much all mounts
 below /var, though there might be exceptions. Mounts in /srv are
 essential, too. Mounts which are often non-essential are external
 media, USB sticks and suchlike. However, those are probably usually
 handled via something like udisks, and only in exceptions via
 /etc/fstab.
Desktop users maybe. Like I said, I very, very rarely use a GUI on a
server; when I do it is often only partial, via ssh -X from a machine
at a different physical location.

I stick in a USB device, tail /var/log/messages, do fdisk -l on the
device, then mount the FS by hand. It may not be common anymore, but it
is not wrong!

  That together is already an indication that the current
  behaviour should be the default when you don't specify anything.
 
 Three other reasons are: nofail already exists in util-linux for a
  long time; there is no "fail" option defined, hence for systemd
 too the nofail case is the opt-in and fail is the default. And the
 other is: changing behaviour forth and back and forth and back is
 just wrong. The behaviour systemd exposes has been this way for 5y
  now, and we shouldn't change it without a really strong reason -- but
 I am pretty sure your specific usecase does not qualify.
 
 Also, again, nofail predates systemd: you should have used it for
 your usecase even in sysvinit. If you so will, then the old setup was
 already borked for you, even though admittedly the effect was less
 fatal.
No, I don't agree: if something is a warning - then it is ONLY a
warning! - not borked!

An entry in fstab that does not match a mount is just that, an entry...
not a failure or a misconfiguration; it is simply not present!
What about a reference to an external USB drive that is off? It is not
a faulty reference, it is just not the current state of the machine.

To me (and most people) fstab is not only "what is now": it is "what is
now" and "what may be". It is not real time, so it is both a complete
description of what is now and a description of what will be later...

I can place an entry in fstab while I mkfs the disk it references; I
often do. I maintain that this is not wrong - it is not mounted, just
not present yet.

My guess is we are not going to agree but thanks at least for the
replies,

Jon




Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Reindl Harald



Am 29.06.2015 um 21:36 schrieb jon:

On Mon, 2015-06-29 at 20:50 +0200, Lennart Poettering wrote:

On Mon, 29.06.15 19:20, jon (j...@jonshouse.co.uk) wrote:


Reversing the logic by adding a mustexist fstab option and keeping the
default behaviour would fix it.


At this time, systemd has been working this way for 5y now. The
behaviour it implements is also the right behaviour I am sure, and the
nofail switch predates systemd even.

I disagree strongly.  As I said, the option did not do anything... so
the change only really happened when systemd coded it. Very few people
are using systemd, so this change may be stable old code in your world;
in my world it is new, and its behaviour is wrong!


while i often disagree with systemd developers
*that* behavior is simply correct


  Hence I am very sure the
default behaviour should stay the way it is.

Your default behaviour or mine !

Many people that I know who run Linux for real work have been using
systemd for 5 minutes; most have yet to discover it at all!


well, it needs much more than 5 minutes to get into a new core part of
the system, and that said: Fedora users have been using systemd since
2011, me included, in production



The first I knew about it was when Debian adopted it; I have been using
systemd for a few hours only. It may be your 5-year-old pet, but to me
it is just a new set of problems to solve.


you can't blame others for that. systemd was available and widely known 
before



I normally install machines with Debian stable, so I am just
discovering systemd for the first time.


and *that* is the problem, not a *minor* change. i would understand if
you had to change /etc/fstab for 5000 machines, but even then: large
setups are not maintained by logging in everywhere manually and making
the same change by hand, especially since you can make this change
*before* the upgrade, because it doesn't harm sysvinit systems - they
have no problem with nofail
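
as a hedged example of such a preemptive change, using one of the fstab
lines quoted earlier in this thread:

  # tag the backup volume as non-essential before the upgrade
  sed -i '/LABEL=volprbak/s/defaults/defaults,nofail/' /etc/fstab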



Bringing up networking/sshd in parallel to the admin shell would also
mitigate my issue


That's a distro decision really. Note though that many networking
implementations as well as sshd are actually not ready to run in
early-boot, like the emergecny mode is. i.e. they assume access to
/var works, use PAM, and so on, which you better avoid if you want to
run in that boot phase.


Hmmm ... it used to be possible with telnetd, so I suspect it is still
possible with sshd.


not reliable - the emergency shell is there even when mounting the
rootfs fails, and then you can't bring up most services


but i agree that *trying to bring up network and sshd* would not be a
bad idea; in case the problem is with an unimportant data disk it may
help



This is the problem with systemd: by changing one small behaviour it
now requires many, many changes to get a truly useful system behaviour
back.


honestly, it is not normal that mountpoints disappear like in your
case, and even if they do - a machine which is that important usually
has a *tested* setup and is reachable via KVM or something similar



As you might know my company cares about containers, big servers
primarily, while I personally run things on a laptop and a smaller
server on the Internet. Hence believe me that I usually care about
laptop setups at least as much as for server setups.

Nope, did not know that, interesting.


you know Red Hat?
read the recent IT news about Red Hat and containers





Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Reindl Harald


Am 29.06.2015 um 17:01 schrieb jon:

On Mon, 2015-06-29 at 14:21 +, Jóhann B. Guðmundsson wrote:


On 06/29/2015 02:08 PM, jon wrote:

https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system

I just installed debian 8.1, on the whole my reaction is mixed, one
thing however really pisses me off more than any other

5.6.1. Stricter handling of failing mounts during boot under systemd

This is not "stricter", it is a change in default behaviour.

This change is a shit idea. Who do I shout at to get the behaviour
modified back to something sensible?



The systemd community only recommends what downstream consumers of it
should do; it does not dictate or otherwise decide how those consumers
eventually implement systemd. So if you don't like how systemd is
implemented in Debian, you should voice your concerns with the Debian
community.

Ok

Who writes/maintains the code that parses nofail in /etc/fstab ?
Who writes/maintains the typical system boot code (whatever has replaced
rc.sysinit) ?

I suspect the answer to both is the systemd maintainers, in which case
is this not the correct place to bitch about it?


i don't get what there is to bitch about at all

you have a mountpoint in /etc/fstab and don't care whether it gets
mounted at boot, so that data gets written into the folder instead of
onto the correct filesystem?


normally that is not what somebody expects, and if that is the desired
behavior for you, just say so to your operating system and add nofail






Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Reindl Harald



Am 29.06.2015 um 18:50 schrieb jon:

On Mon, 2015-06-29 at 17:54 +0200, Reindl Harald wrote:

Am 29.06.2015 um 17:01 schrieb jon:

On Mon, 2015-06-29 at 14:21 +, Jóhann B. Guðmundsson wrote:


On 06/29/2015 02:08 PM, jon wrote:

https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#systemd-upgrade-default-init-system

I just installed debian 8.1, on the whole my reaction is mixed, one
thing however really pisses me off more than any other

5.6.1. Stricter handling of failing mounts during boot under systemd

This is not "stricter", it is a change in default behaviour.

This change is a shit idea. Who do I shout at to get the behaviour
modified back to something sensible?



The systemd community only recommends what downstream consumers of it
should do; it does not dictate or otherwise decide how those consumers
eventually implement systemd. So if you don't like how systemd is
implemented in Debian, you should voice your concerns with the Debian
community.

Ok

Who writes/maintains the code that parses nofail in /etc/fstab ?
Who writes/maintains the typical system boot code (whatever has replaced
rc.sysinit) ?

I suspect the answer to both is the systemd maintainers, in which case
is this not the correct place to bitch about it?


i don't get what there is to bitch about at all

you have a mountpoint in /etc/fstab and don't care whether it gets
mounted at boot, so that data gets written into the folder instead of
onto the correct filesystem?

You are making assumptions !


no


Not everyone uses linux in the same way. I have a number of servers that
are RAID, but others I maintain have backup disks instead.


and?


The logic with backup disks is that the volumes are formatted and
installed, then a backup is taken. The backup disk is then removed from
the server and replaced only for backups.

This machine for example is a gen8 HP microserver, it has 4 removable
(non hotswap) disks.
/etc/fstab
LABEL=volpr     /disks/volpr     ext4    defaults    0   0
LABEL=volprbak  /disks/volprbak  ext4    defaults    0   0
LABEL=volpo     /disks/volpo     ext4    defaults    0   0
LABEL=volpobak  /disks/volpobak  ext4    defaults    0   0

At install it looks like this, but after the machine is populated the
two bak volumes are removed. I want (and expect) them to be mounted
again when replaced, but they spend most of the month in a desk drawer.

It is a perfectly valid way of working, it does not cause disruption like
making and breaking mirror pairs - and most importantly has been the way
I have worked for 10 plus years !

I have also built numerous embedded devices that have speculative fstab
entries for volumes that are only present sometimes, In the factory for
example.


and why don't you just add nofail?
it's way shorter than writing a ton of emails


normally that is not what somebody expects and if that is the desired
behavior for you just say that to your operating system and add nofail

This was, and most importantly IS, the behaviour I expect.


what about changing your expectations?
there are way bigger changes than the need to be specific in configs


To get this in perspective I am not complaining about the idea of not
going into admin if an FS is missing, it just should not BREAK the
previous behaviour by being the default


why?


The default for the past N years has been to continue to boot if the FS
is missing, that should STILL be the default.


why?


The flag nofail logic is the wrong way up.  An option should have been
added to indicate that the presence of a given FS is expected and
failure to mount should crash out to admin, the install tools should
then have been modified to add this option to fstab rather than changing
the default behaviour.


nonsense - nofail was *not* invented by systemd; it existed in fact
long before systemd







Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread Lennart Poettering
On Mon, 29.06.15 20:36, jon (j...@jonshouse.co.uk) wrote:

 On Mon, 2015-06-29 at 20:50 +0200, Lennart Poettering wrote:
  On Mon, 29.06.15 19:20, jon (j...@jonshouse.co.uk) wrote:
  
   Reversing the logic by adding a mustexist fstab option and keeping the
   default behaviour would fix it.
  
  At this time, systemd has been working this way for 5y now. The
  behaviour it implements is also the right behaviour I am sure, and the
  nofail switch predates systemd even.

 I disagree strongly.  As I said the option did not do anything... so
 the change only really happened when systemd coded it. Very few people
 are using systemd, so this change may be stable old code in your world;
 in my world it is new and its behaviour is wrong !

Well, it has been out there for a while, and it has been shipped for
quite some time in commercial distros like RHEL.

I understand that everybody thinks his own usecase is the most relevant
one, the important one we should focus on when designing our
stuff. But actually it's more complex than that. There are tons of
usecases, and when we pick defaults we should come up with something
that covers a good chunk of them nicely, but also fits conceptually
into how we expect systems to work.

Let's just agree to disagree on this issue.

 Hmmm ... it used to be possible with telnetd, so I suspect it is still
 possible with sshd.

Well, bring it up with your distro. We do not maintain sshd nor its
integration into the distro.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] SysVInit service migration to systemd

2015-06-29 Thread Reindl Harald



On 29.06.2015 at 15:58, Lesley Kimmel wrote:

Jonathan;

Thanks for the background and information. Since you clearly seem to
have a grasp of systemd please humour me with a few more questions (some
of them slightly ignorant):

a) Why are PID files bad?


what are they good for when the supervisor knows the PID to monitor?


b) Why are lock files bad?


what are they good for when the supervisor knows the state of each service?


c) If a/b are so bad why did they persist for so many years in SysVInit?


because SysVInit had no other way to know the PID or whether a service
is running, and things like lockfiles are far from safe when the
application crashes and doesn't remove them - often enough a restart
of services failed because of that



d) Generically, how would you prescribe using systemd to start Java
processes (for Java application servers) that are typically started from
a set of relatively complex scripts that are used to set up the
environment before launching the Java process? It seems that you are
advocating to call, as directly as possible, the target service/daemon.
However, some things don't seem so straight-forward.


most of the complexity in those scripts is there because of the way SysVInit worked

a daemon should read its configuration files itself and not need more
than the path of the config file as a param
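
As an illustration, a unit for such a Java service might look roughly
like this (a sketch only; the unit name, paths and environment file are
invented for the example):

[Unit]
Description=Example Java application server
After=network.target

[Service]
# the environment setup moves from a wrapper script into a plain file
EnvironmentFile=/etc/example-app/env
ExecStart=/usr/bin/java $JAVA_OPTS -jar /opt/example-app/app.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target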






[systemd-devel] Errors/warnings when cloning systemd

2015-06-29 Thread Liam R. Howlett
Hello,

Since git 2.3, there have been a number of new fsck options added which
produce issues when I clone the repository
git://anongit.freedesktop.org/systemd/systemd

$ git fsck --full
Checking object directories: 100% (256/256), done.
warning in tag 2a440962bc639c674cef95f5dee6f184c5daf170: invalid format
- expected 'tagger' line
warning in tag 8f0382ac43d4bc4ac7a9f9e0f5429b66f68277e7: invalid format
- expected 'tagger' line
warning in tag 0ee9cce0fb70459dd4b242105189c08dce60c3cd: invalid format
- expected 'tagger' line
warning in tag 3e39ef377f9c16cf171048717d4f5a48f1ee86a9: invalid format
- expected 'tagger' line
warning in tag 2854cc89180bc49418b6da7782af54c9ad52fa92: invalid format
- expected 'tagger' line
warning in tag 36e67de9d076a7016d983a9e646573bd4c34c36a: invalid format
- expected 'tagger' line
warning in tag 10d5c9bbfb83f58282ae8c5b9bb03c89f6f7c1c3: invalid format
- expected 'tagger' line
warning in tag 95c505e822f2a4236de7826a71135a807d0e84e6: invalid format
- expected 'tagger' line
warning in tag 107675ab94583f3545e726665a0942a9418ffacd: invalid format
- expected 'tagger' line
warning in tag 4c53c65012f8f72fbec1704a45c3369b37669854: invalid format
- expected 'tagger' line
warning in tag 6acb15d604d161e6fe93acd3ad2ea5e0b6b8a81b: invalid format
- expected 'tagger' line
warning in tag fe6149affe1ce5e99c89c96fe239585bc6280100: invalid format
- expected 'tagger' line
warning in tag 681c6bad2f70106b88d039975ab4632ea57cf83a: invalid format
- expected 'tagger' line
warning in tag 13d08844788c11915ffe8d2706ec9d6c3a092cdf: invalid format
- expected 'tagger' line
warning in tag 52ad35448f9d092448b0dce8bb6d779e5f7e3451: invalid format
- expected 'tagger' line
warning in tag 2a9bae5e358a1969ff8170d9d1f8e50a2e8451cb: invalid format
- expected 'tagger' line
warning in tag 92ebaca130775bd85ab95127ba7521470ed55ca5: invalid format
- expected 'tagger' line
warning in tag a5c87ab3a38ed39e941f1578043ee4373478daf7: invalid format
- expected 'tagger' line
warning in tag 1c67b7911e5f44575e87a14ddab85cf1530b86fb: invalid format
- expected 'tagger' line
warning in tag 7ee8ac4901dd5677f3127797ab9817c1bd81bb03: invalid format
- expected 'tagger' line
warning in tag d4762f52855dfcfdf00eaf2cd6d5a97dbb44cb6d: invalid format
- expected 'tagger' line
warning in tag 10d443f91077c6993940dea69ff36e3d4eac68dd: invalid format
- expected 'tagger' line
warning in tag 2e0ee0ba69685273e77d258e74985b216cd45d46: invalid format
- expected 'tagger' line
warning in tag 263aba6652ad425ceb5f69fca84d9c9bea3c7e1f: invalid format
- expected 'tagger' line
warning in tag 99cd46af09b682100e7f6dee91ada685804f6634: invalid format
- expected 'tagger' line
warning in tag 87b4318694bab466038400a7cda16d74f0124f9f: invalid format
- expected 'tagger' line
warning in tag 072ab46c604109c113bc910fd6785a3164ec6743: invalid format
- expected 'tagger' line
warning in tag 516966e5f321ee05982d60a22be2db9726067bf8: invalid format
- expected 'tagger' line
warning in tag d33aa6a953b415371604f97ba24aadcecf305801: invalid format
- expected 'tagger' line
warning in tag 3b238e73fb93369347db994b7fbc6e3e94b60b5a: invalid format
- expected 'tagger' line
warning in tag 3f32591eda28dffa93bb251f37ecb23e2715263f: invalid format
- expected 'tagger' line
warning in tag 139d3552462c910ae3fa331242ced496ac86421d: invalid format
- expected 'tagger' line
warning in tag 5bc185871cd320bf96d979cab4d999bb1e155411: invalid format
- expected 'tagger' line
warning in tag 3dd5c69b467fafecb9de71f42b72bed14619d28e: invalid format
- expected 'tagger' line
warning in tag 3c42f818504298b3bcae70023575cc4e4cc31754: invalid format
- expected 'tagger' line
warning in tag 4c899ed27ad6b731bd5f49f9066f97cfc17b9163: invalid format
- expected 'tagger' line
warning in tag b69420c6d92f2122831b90a5175e30d1bc2c03ff: invalid format
- expected 'tagger' line
warning in tag 5ee02a523aeb72890d5d04497f2f0e79f2942611: invalid format
- expected 'tagger' line
warning in tag 646cb8fcd9e3f3fe6a61faedea9a6761d059770b: invalid format
- expected 'tagger' line
warning in tag c279121b842c839f40d0bab6616b6e6e2bc84703: invalid format
- expected 'tagger' line
warning in tag ce6557d6665ad6ceaa79d944d9bcabf9d8c31c76: invalid format
- expected 'tagger' line
warning in tag ee5a0f2f2cc4a287ff9c2afb0e7b5f1cbc385fb0: invalid format
- expected 'tagger' line
warning in tag 9e55ed480114eda23309a4d985c6b918ecc7d362: invalid format
- expected 'tagger' line
warning in tag ef7a61a625c7b29ec4a5bf584dec5683f1039de8: invalid format
- expected 'tagger' line
warning in tag dc85d0583edad13d4811ef013934e82a0b5a9ef1: invalid format
- expected 'tagger' line
warning in tag b0261d4d129f33a7cdd0e1572e6263989fa84075: invalid format
- expected 'tagger' line
warning in tag e089b6575d5ab547f45258f787291f6764935378: invalid format
- expected 'tagger' line
warning in tag a8d43f4368439a28aaecb46136f1004e516c8ce1: invalid format
- expected 'tagger' line
warning in tag 9a6184d0a88d66526ea7730ae8e006d943eaae0e: invalid format
- expected 'tagger' line

Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
On Mon, 2015-06-29 at 22:16 +0200, Reindl Harald wrote:
 
 On 29.06.2015 at 21:36, jon wrote:
  On Mon, 2015-06-29 at 20:50 +0200, Lennart Poettering wrote:
  On Mon, 29.06.15 19:20, jon (j...@jonshouse.co.uk) wrote:
 
  Reversing the logic by adding a mustexist fstab option and keeping the
  default behaviour would fix it.
 
  At this time, systemd has been working this way for 5y now. The
  behaviour it implements is also the right behaviour I am sure, and the
  nofail switch predates systemd even.
  I disagree strongly.  As I said the option did not do anything... so
  the change only really happened when systemd coded it. Very few people
  are using systemd, so this change may be stable old code in your world;
  in my world it is new and its behaviour is wrong !
 
 while i often disagree with systemd developers
 *that* behavior is simply correct
Please justify that !   Correct how ?

The system behaviour to date may not have been the correct behaviour,
but the new behaviour is a change, so to my mind it is a change of
default.

The behaviour now may have been the intended behaviour, but that does
not make it in any way correct! 

If I make a device in red for 10 years, then one day make it green, I
can't then say I always intended it to be green and then assume all my
red users will be satisfied with that answer !

 
   Hence I am very sure the
   default behaviour should stay the way it is.
  Your default behaviour or mine !
 
  Many people that I know who run linux for real work have been using
  systemd for 5 mins, most have yet to discover it at all !
 
 well, it needs much more than 5 minutes to get into a new core part of
 the system and that said: Fedora users have been using systemd since
 2011, me included, in production
 
  The first I knew about it was when Debian adopted it, I have been using
  systemd for a few hours only. It may be your 5 year old pet, but to me
  it is just a new set of problems to solve.
 
 you can't blame others for that. systemd was available and widely known 
 before
Yes Yes Yes.

I get bugger all free time. The time I have is spent building things.
Today I am building a Raspberry Pi based jig to re-flash a device with
embedded Wifi.  Next week I will be working on an embedded control
system, week after maybe integrating a device with a web front end. I
simply don't get the free time to try new things just in case some
distro maintainer decides that is a good idea, or fashionable or
whatever justification they use for costing me work ;-)

 
  I normally install machines with Debian stable,  I am just discovering
  systemd for the first time.
 
 and *that* is the problem not a *minor* change, i would understand if
 you have to change /etc/fstab for 5000 machines but even then: large
 setups are not maintained by logging in everywhere manually and making
 the same change, especially since you can make this change *before*
 upgrades because it doesn't harm sysvinit systems, they have no problem
 with nofail
Yes I do follow your logic, but I am not sure that most distros would not
simply warn unknown mount option bla..

I have done a few roll outs, I also maintained my own distribution for
several embedded products and wrote a linux network installer to
manufacture devices, I do have some experience. 

As a point of interest the fstab in each embedded device contains
entries for two FS that do not exist at initial boot.

The installer for that product creates a / filesystem with a complete
pre-configured linux and references two extra filesystems, /configure
and /data.

/configure is created after the machine's first run (again in the
factory) and /data is created when the user sets up the machine (a
camera DVR box). /data is also re-created by an mkfs if the user chooses
to reset the box to defaults or replace the internal disk.   

I mention this to explain that referencing FS that may not currently
exist, or existed only for a short time, is quite normal in my field. To
my knowledge many products do this internally. I tend therefore to do
similar things with my servers, on the not unreasonable grounds that to
date it has always worked.

  Bringing up networking/sshd in parallel to the admin shell would also
  mitigate my issue
 
  That's a distro decision really. Note though that many networking
  implementations as well as sshd are actually not ready to run in
  early-boot, like the emergency mode is. i.e. they assume access to
  /var works, use PAM, and so on, which you better avoid if you want to
  run in that boot phase.
 
  Hmmm ... it used to be possible with telnetd, so I suspect it is still
  possible with sshd.
 
 not reliable, the emergency shell is even there when mounting the
 rootfs fails and then you can't bring up most services
Hm ... emergency shell is correct, ever tried fixing anything from
the initrd util set alone - or just busybox and most of the kernel
modules missing! Most people reach for a re-install or start a full fat
linux from LAN or removable 

Re: [systemd-devel] Question about ExecStartPost= and startup process

2015-06-29 Thread Francis Moreau
On 06/28/2015 07:21 PM, Reindl Harald wrote:
 
 
 On 28.06.2015 at 19:02, Francis Moreau wrote:
 On 06/28/2015 01:01 PM, Reindl Harald wrote:

 On 28.06.2015 at 12:00, Francis Moreau wrote:
 Hello,

 For services with Type=Forking, I'm wondering if systemd  proceeds
 starting follow-up units when the command described by ExecStart= exits
 or when the one described by ExecStartPost= exits ?

 I tried to read the source code to figure this out and it *seems* that
 the latter is true but I'm really not sure.

 after ExecStartPost because anything else would make no sense, a unit is
 started when *all* of it is started - see the recent mariadb units on
 Fedora - if systemd fired up daemons which depend on mariadb the
 whole wait-ready stuff wouldn't work

 Ok, then the next naive question would be: what is the purpose of the
 ExecStartPost= directive since several commands can already be 'queued'
 with the ExecStart= one?
 
 no, they can't, not for every service type
 When Type is not oneshot, only one command may and must be given
 
 read http://www.freedesktop.org/software/systemd/man/systemd.service.html
 

correct, I was actually confused by what I read from the source code:

service_sigchld_event() {
...
} else if (s->control_pid == pid) {
...
        if (s->control_command &&
            s->control_command->command_next &&
            f == SERVICE_SUCCESS) {
                service_run_next_control(s);
        }

this makes it look like forking type services could have several commands
queued in the ExecStart directive...

Thanks.


Re: [systemd-devel] Question about ExecStartPost= and startup process

2015-06-29 Thread Andrei Borzenkov
On Mon, Jun 29, 2015 at 11:01 AM, Francis Moreau francis.m...@gmail.com wrote:
 On 06/28/2015 07:21 PM, Reindl Harald wrote:


 On 28.06.2015 at 19:02, Francis Moreau wrote:
 On 06/28/2015 01:01 PM, Reindl Harald wrote:

 On 28.06.2015 at 12:00, Francis Moreau wrote:
 Hello,

 For services with Type=Forking, I'm wondering if systemd  proceeds
 starting follow-up units when the command described by ExecStart= exits
 or when the one described by ExecStartPost= exits ?

 I tried to read the source code to figure this out and it *seems* that
 the latter is true but I'm really not sure.

 after ExecStartPost because anything else would make no sense, a unit is
 started when *all* of it is started - see the recent mariadb units on
 Fedora - if systemd fired up daemons which depend on mariadb the
 whole wait-ready stuff wouldn't work

 Ok, then the next naive question would be: what is the purpose of the
 ExecStartPost= directive since several commands can already be 'queued'
 with the ExecStart= one?

 no, they can't, not for every service type
 When Type is not oneshot, only one command may and must be given

 read http://www.freedesktop.org/software/systemd/man/systemd.service.html


 correct, I was actually confused by what I read from the source code:

 service_sigchld_event() {
 ...
 } else if (s->control_pid == pid) {
 ...
         if (s->control_command &&
             s->control_command->command_next &&
             f == SERVICE_SUCCESS) {
                 service_run_next_control(s);
         }

 this makes it look like forking type services could have several commands
 queued in the ExecStart directive...


IIRC this is checked when the unit definition is parsed.


Re: [systemd-devel] [PATCH] loginctl: add rule for qemu's pci-bridge-seat

2015-06-29 Thread systemd github import bot
Patchset imported to github.
To create a pull request, one of the main developers has to initiate one via:
https://github.com/systemd/systemd/compare/master...systemd-mailing-devs:1435563731-465-1-git-send-email-kraxel%40redhat.com

--
Generated by https://github.com/haraldh/mail2git


[systemd-devel] [PATCH] loginctl: add rule for qemu's pci-bridge-seat

2015-06-29 Thread Gerd Hoffmann
Signed-off-by: Gerd Hoffmann kra...@redhat.com
---
 src/login/71-seat.rules.in | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/login/71-seat.rules.in b/src/login/71-seat.rules.in
index ab7b66f..270da71 100644
--- a/src/login/71-seat.rules.in
+++ b/src/login/71-seat.rules.in
@@ -17,6 +17,12 @@ SUBSYSTEM=="usb", ATTR{bDeviceClass}=="09", TAG+="seat"
 # 'Plugable' USB hub, sound, network, graphics adapter
 SUBSYSTEM=="usb", ATTR{idVendor}=="2230", ATTR{idProduct}=="000[13]", ENV{ID_AUTOSEAT}="1"
 
+# qemu (version 2.4+) has a PCI-PCI bridge (-device pci-bridge-seat)
+# to group devices belonging to one seat.
+# see http://git.qemu.org/?p=qemu.git;a=blob;f=docs/multiseat.txt
+SUBSYSTEM=="pci", ATTR{vendor}=="0x1b36", ATTR{device}=="0x000a", \
+   TAG+="seat", ENV{ID_AUTOSEAT}="1"
+
 # Mimo 720, with integrated USB hub, displaylink graphics, and e2i
 # touchscreen. This device carries no proper VID/PID in the USB hub,
 # but it does carry good ID data in the graphics component, hence we
-- 
1.8.3.1
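
For context, the qemu side of this looks roughly as follows (a sketch
based on the docs/multiseat.txt referenced above; exact device options
may differ between qemu versions):

qemu-system-x86_64 [...] \
  -device pci-bridge-seat,addr=12.0,chassis_nr=2,id=head.2 \
  -device secondary-vga,bus=head.2,addr=02.0,id=video.2 \
  -device nec-usb-xhci,bus=head.2,addr=0f.0,id=usb.2 \
  -device usb-kbd,bus=usb.2.0,port=1,id=kbd.2 \
  -device usb-tablet,bus=usb.2.0,port=2,id=tablet.2

With the rule above, devices behind the 1b36:000a bridge then get
grouped into one seat automatically.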



Re: [systemd-devel] Question about ExecStartPost= and startup process

2015-06-29 Thread Francis Moreau
On 06/28/2015 07:35 PM, Andrei Borzenkov wrote:
 On Sun, 28 Jun 2015 19:02:57 +0200,
 Francis Moreau francis.m...@gmail.com wrote:
 
 On 06/28/2015 01:01 PM, Reindl Harald wrote:


 On 28.06.2015 at 12:00, Francis Moreau wrote:
 Hello,

 For services with Type=Forking, I'm wondering if systemd  proceeds
 starting follow-up units when the command described by ExecStart= exits
 or when the one described by ExecStartPost= exits ?

 I tried to read the source code to figure this out and it *seems* that
 the latter is true but I'm really not sure.

 after ExecStartPost because anything else would make no sense, a unit is
 started when *all* of it is started - see the recent mariadb units on
 Fedora - if systemd fired up daemons which depend on mariadb the
 whole wait-ready stuff wouldn't work


 Ok, then the next naive question would be: what is the purpose of the
 ExecStartPost= directive since several commands can already be 'queued'
 with the ExecStart= one ?

 
 Long running services (i.e. daemons) are represented by systemd with
 the PID of the main process (which is *the* service for systemd). With
 multiple ExecStart commands there is no obvious way to know which of
 the started processes is the main one. That's the reason to allow just
 one command there. Once the main process is known, it is possible to
 have any number of follow-up ExecStartPost commands.
 

I see, thanks.

So basically for a Type=forking service, ExecStart= is used to find out
the PID of the main process, and additional commands can be passed
through ExecStartPost=, and systemd will wait for the latter to
finish before starting follow-up units.
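
A minimal sketch of that pattern (the daemon name, paths and readiness
helper here are invented for illustration):

[Unit]
Description=Example forking daemon

[Service]
Type=forking
# the process that survives ExecStart's fork becomes the main PID
ExecStart=/usr/sbin/exampled --daemonize
# runs once the main PID is known; follow-up units ordered after this
# service are only started when this command has exited successfully
ExecStartPost=/usr/local/bin/exampled-wait-ready

[Install]
WantedBy=multi-user.target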

Thanks.


Re: [systemd-devel] SysVInit service migration to systemd

2015-06-29 Thread Cristian Rodríguez
On Mon, Jun 29, 2015 at 10:58 AM, Lesley Kimmel ljkimme...@hotmail.com wrote:
 Jonathan;

 Thanks for the background and information. Since you clearly seem to have a
 grasp of systemd please humour me with a few more questions (some of them
 slightly ignorant):

 a) Why are PID files bad?

Because they pretend to work but they really don't.
This is because only a tiny portion of software implements pid file
creation correctly, in part due to the lack of a FreeBSD-like pidfile_*()
interface that at least tries to be correct.
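
For illustration, the classic init-script pattern looks roughly like
this (a sketch only; the daemon name and paths are made up):

# start: the daemon forks away, then we guess its PID
/usr/sbin/exampled
pidof exampled > /var/run/exampled.pid

# stop: blindly trusts whatever the file says; if exampled crashed
# earlier the file is stale, and the PID may meanwhile belong to an
# unrelated process
kill "$(cat /var/run/exampled.pid)"
rm -f /var/run/exampled.pid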

 b) Why are lock files bad?

Mostly because, at least until the *very recent* advent of file-private
POSIX locks (un-POSIX locks), the OS facilities were terrible.

 c) If a/b are so bad why did they persist for so many years in SysVInit?

Because sysvinit is unable to track processes; in that case you need at
least to know the PID of the daemon in order to be able to kill it.


Re: [systemd-devel] Errors/warnings when cloning systemd

2015-06-29 Thread Mantas Mikulėnas
On Mon, Jun 29, 2015 at 11:21 PM, Liam R. Howlett 
liam.howl...@windriver.com wrote:

 Hello,

 Since git 2.3, there have been a number of new fsck options added which
 produce issues when I clone the repository
 git://anongit.freedesktop.org/systemd/systemd

 $ git fsck --full
 Checking object directories: 100% (256/256), done.
 warning in tag 2a440962bc639c674cef95f5dee6f184c5daf170: invalid format
 - expected 'tagger' line


This is fine, old git versions (until mid-2005) used to create tags without
a 'tagger' field and you can find those everywhere including git.git itself.

error in tag e1ea4e5f1c429fbe62e76fc5b42bee32c2dcd770: unterminated
 header
 error in tag f298b6a712bbe700fe1dbac5b81cdc7dd22be26d: unterminated
 header
 error in tag 4ea98ca6db3b84f5bc16eac8574e5c209ec823ce: unterminated
 header


Also seems to be caused by old-format tags – when there was no message,
git-tag didn't write the separating empty line either.
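
For illustration, such an old-format tag object looks roughly like this
(a generic sketch, not the contents of any tag listed above):

object 3e2c4f0e2f2b6f1c9d0a5b7c8d9e0f1a2b3c4d5e
type commit
tag v001
the tag message starts here, with no 'tagger' line and, in the oldest
tags, no separating empty line before it

A modern tag carries an additional header line such as:

tagger A U Thor author@example.org 1435592000 +0200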

These new tests are enabled by default when using git fsck.  I have been
 testing with git version 2.4.4.409.g5b1d901 and thought you might want
 to know so the error/warning messages can be corrected.


Not sure if it's worth fixing it, the post-clone fsck does not show such
warnings, only a full fsck does. And existing repos wouldn't fetch the
updated tags.

Though it could be done easily using
https://gist.github.com/c66d9ad2cfd1395a97fa or similar scripts.

Also note that the primary repository is at
https://github.com/systemd/systemd these days.

-- 
Mantas Mikulėnas graw...@gmail.com


Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
On Mon, 2015-06-29 at 20:40 +0200, Lennart Poettering wrote:
 On Mon, 29.06.15 18:50, jon (j...@jonshouse.co.uk) wrote:
 
   and why don't you just add nofail
   it's way shorter than writing a ton of emails
 
  Yes, this I have in fact done.  I have the right to complain though
  ! 
 
 You don't have the right to do this on this mailing list though.
So let me get this correct.
I (a user) do not have the right to complain about systemd on the
systemd developer mailing list, even though my complaint is about a new
behaviour of linux machines caused by systemd  hmmm  I won't go
further along this line as I may be banned.

 
 Please do it elsewhere, like on Slashdot or so. The systemd mailing
 list is a forum for technical discussions, and rants like yours are
 simply not appropriate, they are just noise and not constructive.
 
  I am not writing this to take the p***, this is really the type of thing
  that people maintaining a few servers on a small scale have to consider.
 
 Here's a recommendation for the next time you post a rant like this:
 do your research first, and see if there might be technical reasons
 for the behaviour you see, instead of assuming there are no reasons
 for it except that the authors of the software want to be dicks to
 you...

Maybe not dicks to me directly, but maybe dicks for not thinking it
through, not caring that changing default behaviour may have impacts
they have not considered and should be avoided if possible. 

What for example is the technical reason why the nofail logic could
not have been inverted ? I have not heard any compelling technical
argument as to why this could not have worked the other way up and the
default behaviour been kept, other than that it would require
cooperation from the installer authors to add some other flag to
volumes that must always be available at boot time.

Thanks,
Jon




Re: [systemd-devel] Stricter handling of failing mounts during boot under systemd - crap idea !

2015-06-29 Thread jon
On Mon, 2015-06-29 at 20:50 +0200, Lennart Poettering wrote:
 On Mon, 29.06.15 19:20, jon (j...@jonshouse.co.uk) wrote:
 
  Reversing the logic by adding a mustexist fstab option and keeping the
  default behaviour would fix it.
 
 At this time, systemd has been working this way for 5y now. The
 behaviour it implements is also the right behaviour I am sure, and the
 nofail switch predates systemd even.
I disagree strongly.  As I said the option did not do anything... so
the change only really happened when systemd coded it. Very few people
are using systemd, so this change may be stable old code in your world;
in my world it is new and its behaviour is wrong !


 Hence I am very sure the
 default behaviour should stay the way it is.
Your default behaviour or mine !

Many people that I know who run linux for real work have been using
systemd for 5 mins, most have yet to discover it at all !

The first I knew about it was when Debian adopted it, I have been using
systemd for a few hours only. It may be your 5 year old pet, but to me
it is just a new set of problems to solve.

I normally install machines with Debian stable,  I am just discovering
systemd for the first time.

 
  Bringing up networking/sshd in parallel to the admin shell would also
  mitigate my issue
 
 That's a distro decision really. Note though that many networking
 implementations as well as sshd are actually not ready to run in
 early-boot, like the emergency mode is. i.e. they assume access to
 /var works, use PAM, and so on, which you better avoid if you want to
 run in that boot phase.

Hmmm ... it used to be possible with telnetd, so I suspect it is still
possible with sshd.

The logic would be to bring networking up (as configured, but no
services), then start sshd (as it is configured) but with, say, an extra
flag to bring up a new mode

sshd -adminmode for example.


Then from the client:

$ ssh anyolduser@myfailedmachine

This machine is in administration mode and may have failed to fully boot.
Please enter the root password to enter a remote administration shell. 
password:

(admin) #
(admin) # fix it
(admin) # reboot


Requirements:

1) ssh-client can display a message before the login prompt like good old
telnet used to - maybe (shock) just plain text, not sure it can do this
- but hey more code changes are fun, right ...

2) The new ssh server 'admin' mode would need to be much more stateless
like good old telnetd used to be.

This is the problem with systemd: by changing one small behaviour it
now requires many many changes to get a truly useful system behaviour
back.

  I can see that both proposed solutions have issues, but I suspect I am
  not the only one who will not be pleased about this behaviour change.
  
  Changes seem to made with a bias towards desktops or larger data
  centres, but what about the people using discarded PCs and maintaining
  small servers, lots of these floating around smaller organisations. 
 

 As you might know my company cares about containers, big servers
 primarily, while I personally run things on a laptop and a smaller
 server on the Internet. Hence believe me that I usually care about
 laptop setups at least as much as about server setups.
Nope, did not know that, interesting.

I on the other hand normally admin small servers built from old PC
hardware or fully embedded ARM devices. 




