Re: [systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-05-29 Thread Niels de Vos
On Tue, May 30, 2017 at 08:19:16AM +1000, NeilBrown wrote:
> 
> Systemd does not, and will not, support "bg" correctly.
> It has other, better, ways to handle "background" mounting.
> 
> Explain this.
> 
> See also https://github.com/systemd/systemd/issues/6046
> 
> Signed-off-by: NeilBrown 
> ---
>  utils/mount/nfs.man | 18 +-
>  1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/utils/mount/nfs.man b/utils/mount/nfs.man
> index cc6e992ed807..7e76492d454f 100644
> --- a/utils/mount/nfs.man
> +++ b/utils/mount/nfs.man
> @@ -372,6 +372,21 @@ Alternatively these issues can be addressed
>  using an automounter (refer to
>  .BR automount (8)
>  for details).
> +.IP
> +When
> +.B systemd
> +is used to mount the filesystems listed in
> +.IR /etc/fstab ,
> +the
> +.B bg
> +option is not supported, and may be stripped from the option list.
> +Similar functionality can be achieved by providing the
> +.B x-systemd.automount
> +option.  This will cause
> +.B systemd
> +to attempt to mount the filesystem when the mountpoint is first
> +accessed, rather than during system boot.  The mount still happens in
> +the "background", though in a different way.
>  .TP 1.5i
>  .BR rdirplus " / " nordirplus
>  Selects whether to use NFS v3 or v4 READDIRPLUS requests.
> @@ -1810,7 +1825,8 @@ such as security negotiation, server referrals, and named attributes.
>  .BR rpc.idmapd (8),
>  .BR rpc.gssd (8),
>  .BR rpc.svcgssd (8),
> -.BR kerberos (1)
> +.BR kerberos (1),
> +.BR systemd.mount (5) .
>  .sp
>  RFC 768 for the UDP specification.
>  .br
> -- 
> 2.12.2
> 

I like this, it makes it so much easier for users to find the better
alternative.

FWIW,
Reviewed-by: Niels de Vos 

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [PATCH] nfs.man: document incompatibility between "bg" option and systemd.

2017-05-29 Thread NeilBrown

Systemd does not, and will not, support "bg" correctly.
It has other, better, ways to handle "background" mounting.

Explain this.

See also https://github.com/systemd/systemd/issues/6046

Signed-off-by: NeilBrown 
---
 utils/mount/nfs.man | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/utils/mount/nfs.man b/utils/mount/nfs.man
index cc6e992ed807..7e76492d454f 100644
--- a/utils/mount/nfs.man
+++ b/utils/mount/nfs.man
@@ -372,6 +372,21 @@ Alternatively these issues can be addressed
 using an automounter (refer to
 .BR automount (8)
 for details).
+.IP
+When
+.B systemd
+is used to mount the filesystems listed in
+.IR /etc/fstab ,
+the
+.B bg
+option is not supported, and may be stripped from the option list.
+Similar functionality can be achieved by providing the
+.B x-systemd.automount
+option.  This will cause
+.B systemd
+to attempt to mount the filesystem when the mountpoint is first
+accessed, rather than during system boot.  The mount still happens in
+the "background", though in a different way.
 .TP 1.5i
 .BR rdirplus " / " nordirplus
 Selects whether to use NFS v3 or v4 READDIRPLUS requests.
@@ -1810,7 +1825,8 @@ such as security negotiation, server referrals, and named attributes.
 .BR rpc.idmapd (8),
 .BR rpc.gssd (8),
 .BR rpc.svcgssd (8),
-.BR kerberos (1)
+.BR kerberos (1),
+.BR systemd.mount (5) .
 .sp
 RFC 768 for the UDP specification.
 .br
-- 
2.12.2





Re: [systemd-devel] systemd and NFS "bg" mounts.

2017-05-29 Thread NeilBrown
On Mon, May 29 2017, Lennart Poettering wrote:

> On Fri, 26.05.17 12:46, NeilBrown (ne...@suse.com) wrote:
>
>> 
>> Hi all,
>>  it appears that systemd doesn't play well with NFS "bg" mounts.
>>  I can see a few options for how to address this and wonder if anyone
>>  has any opinions.
>
> Yeah, this has come up before. Long story short: "bg" is simply not
> compatible with systemd. We assume that the /bin/mount's effect is
> visible in /proc/self/mountinfo, and everything else is considered a
> bug, i.e. /bin/mount lying to us. And I think that's a pretty rational
> assumption and requirement to make.
>
> I am not particularly convinced the "bg" usecase is really such a
> great idea, since it is necessarily racy: you never know whether mount
> is actually in effect or not, even though /bin/mount claims so. I am
> pretty sure other options (such as autofs mounts, which are dead-easy
> to use in systemd: just replace the "bg" mount option in fstab by
> "x-systemd.automount") are much better approaches to the problem at
> hand: they also make your local system less dependent on remote
> network access, but they do give proper guarantees about their
> runtime: when the autofs mount is established the path is available.
>
> Hence I am tempted to treat the issue as a documentation and warning
> issue: accept that "bg" is not supported, but document this better. In
> addition, we should probably log about "bg" being broken in the
> fstab-generator. I have filed a bug about that now:
>
> https://github.com/systemd/systemd/issues/6046

There is a weird distorted sort of justice here.  When NFS first
appeared, it broke various long-standing Unix practices, such as
O_EXCL|O_CREAT for lock files.  Now systemd appears and breaks a
long-standing NFS practice: bg mounts.

I hoped we could find a way to make them work, but I won't cry over
their demise.  I much prefer automount.  I think all NFS mounts
should be automounts.

I see this is already documented in systemd.mount:

  The NFS mount option bg for NFS background mounts as documented in nfs(5) is
  not supported in /etc/fstab entries.

I wonder how many people actually read that.
We should probably add symmetric documentation to nfs(5)
Both should give clear pointers to x-systemd.automount.
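For concreteness, the nfs(5) pointer could be accompanied by an fstab example along these lines (server name and paths are placeholders):

```
# Legacy entry relying on "bg" -- not supported under systemd:
server:/export  /mnt/data  nfs  bg,soft       0 0

# systemd-native replacement: an autofs point that mounts on first access:
server:/export  /mnt/data  nfs  x-systemd.automount,soft  0 0
```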


>
>>  This is better, but the background mount.nfs can persist for a long
>>  time.  I don't think it persists forever, but at least half an hour I
>>  think.
>> 
>>  When the foo.mount unit is stopped, the mount.nfs process isn't
>>  killed.
>
> Hmm, if you see this, then this would be a bug: mount units that are
> stopped should not leave processes around.
>
>>  I don't think this is a major problem, but it is unfortunate and could
>>  be confusing.  During testing I've had multiple mount.nfs background
>>  processes all attached to the one .mount unit.
>
> Humpf, could you file a bug?

https://github.com/systemd/systemd/issues/6048

>
> While I think the "bg" concept is broken, as discussed above, having
> FUSE mounts with processes in the background is actually supported,
> and we should clean them up properly when the mount unit is stopped.
>
> Hmm, maybe mount.nfs isn't properly killable? i.e. systemd tries to
> kill it, but it refuses to be killed?

mount.nfs responds cleanly to SIGTERM.

>
>>  What should we do about bg NFS mounts with systemd?
>>  Some options:
>>- declare "bg" to be not-supported.  If you don't need the filesystem
>>  to be present for boot, then use x-systemd.automount, or some other
>>  automount scheme.
>>  If we did this, we probably need to make it very obvious that "bg"
>>  mounts aren't supported - maybe a log message that appears when
>>  you do "systemctl status ..." ??
>
> I am all for this, as suggested above. I'd only log from
> fstab-generator though. (That said, if we want something stronger, we
> could also add the fact that we stripped "bg" from the mount options
> to the Description= of the generated mount unit.)

That last bit sounds like a very good idea.  Stripping "bg" could be seen
as a "surprising" thing for fstab-generator to do.  Making it as obvious
as possible to the sys-admin would be a good thing (and would probably
make support personnel happy too).

>
>>- decide that "bg" is really just a lame attempt at automounting, and
>>  that now we have real automounting, "bg" can be translated to that.
>>  So systemd-fstab-generator would treat "bg" like
>>  "x-systemd.automount" and otherwise strip it  from the list of
>>  options.
>
> I am a bit afraid of this I must say. The semantics are different
> enough to likely cause more problems than we'd solve with this. Not
> supporting this at all sounds like the much better approach here:
> let's strip "bg" when specified.
>
>>- do our best to really support "bg".  That means, at least, applying
>>  a much larger timeout to "bg" mounts, and preferably killing any
>>  background processes when a mount unit is stopped.  Below is 

Re: [systemd-devel] [RFC PATCH v3 1/5] ACPI: button: Add indication of BIOS notification and faked events

2017-05-29 Thread Benjamin Tissoires
Hi Lv,

On May 27 2017 or thereabouts, Lv Zheng wrote:
> This patch adds a parameter to acpi_lid_notify_state() so that it can act
> differently against BIOS notification and kernel faked events.
> 
> Cc: 
> Cc: Benjamin Tissoires 
> Cc: Peter Hutterer 
> Signed-off-by: Lv Zheng 
> ---

Replying to this one for the entire series:
last week was a mix of public holidays and PTO for me. I was only
able to review this series today, so sorry for the delay.

I still have a feeling this driver is far too engineered for a simple
input node. There are internal states, deferrals, mangling of events and too
many kernel parameters.

I still need to get my head around it, but the more I think of it, the
more I think the solution provided by Lennart in
https://github.com/systemd/systemd/issues/2807 is the simplest one:
when we are not sure about the state of the LID switch because _LID
might be wrong, we shouldn't export a LID input node.
Which means that all broken cases would be fixed by just a quirk
"unreliable lid switch".

Give me a day or two to get this in a better shape.

Cheers,
Benjamin



Re: [systemd-devel] socket unit refusing connection when JOB_STOP is pending

2017-05-29 Thread Lennart Poettering
On Tue, 16.05.17 11:28, Moravec, Stanislav (ERT) (stanislav.mora...@hpe.com) 
wrote:

> Hello all,
> 
> I wanted to seek your opinion about correctness of the current behavior
> of socket activated units.
> 
> Let's assume we have socket activated service (for example authd - 
> auth.socket) and 
> some other background service (for the purpose of this test called 
> authtest.service) 
> that needs to connect to the socket service to properly stop itself.
> 
> The authtest defines dependency on auth.socket as expected:
> 
> # cat /usr/lib/systemd/system/authtest.service
> [Unit]
> Description=Test Script to connect auth during shutdown
> After=auth.socket
> Requires=auth.socket
> 
> [Service]
> ExecStart=/bin/true
> ExecStop=/usr/bin/connect_authd
> Type=oneshot
> RemainAfterExit=yes
> 
> [Install]
> WantedBy=multi-user.target
> 
> Yet, authtest doesn't stop correctly (in our test case, the connection just 
> fails,
> not real failure), because auth.socket refuses connections as soon as pending 
> job 
> on auth.socket is JOB_STOP, even if it's not yet time to really stop the 
> unit. 
> 
> The auth.socket:
> May 16 11:23:41 pra0097 systemd[1]: Installed new job auth.socket/stop as 9395
> May 16 11:23:41 pra0097 systemd[1]: Incoming traffic on auth.socket
> May 16 11:23:41 pra0097 systemd[1]: Suppressing connection request on auth.socket since unit stop is scheduled.
> // NOTE the above
> May 16 11:24:44 pra0097 systemd[1]: auth.socket changed listening -> dead
> May 16 11:24:44 pra0097 systemd[1]: Job auth.socket/stop finished, result=done
> May 16 11:24:44 pra0097 systemd[1]: Closed Authd Activation Socket.
> May 16 11:24:44 pra0097 systemd[1]: Stopping Authd Activation Socket.
> 
> The authtest:
> May 16 11:23:41 pra0097 systemd[1]: Installed new job authtest.service/stop as 9337
> May 16 11:23:41 pra0097 systemd[1]: About to execute: /usr/bin/connect_authd
> May 16 11:23:41 pra0097 systemd[1]: Forked /usr/bin/connect_authd as 7051
> May 16 11:23:41 pra0097 systemd[1]: authtest.service changed exited -> stop
> May 16 11:23:41 pra0097 systemd[1]: Stopping Test Script to connect auth during shutdown...
> May 16 11:23:41 pra0097 systemd[7051]: Executing: /usr/bin/connect_authd
> May 16 11:23:41 pra0097 connect_authd[7051]: Tue May 16 11:23:41 CEST 2017
> May 16 11:23:41 pra0097 connect_authd[7051]: COMMAND PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
> May 16 11:23:41 pra0097 connect_authd[7051]: systemd   1 root   38u  IPv6  19431  0t0  TCP *:auth (LISTEN)
> May 16 11:23:41 pra0097 connect_authd[7051]: ERROR reading from socket: Connection reset by peer
> May 16 11:23:41 pra0097 connect_authd[7051]: sending message: 80,80
> May 16 11:23:41 pra0097 systemd[1]: Child 7051 belongs to authtest.service
> May 16 11:23:41 pra0097 systemd[1]: authtest.service: control process exited, code=exited status=0
> May 16 11:23:41 pra0097 systemd[1]: authtest.service got final SIGCHLD for state stop
> May 16 11:23:41 pra0097 systemd[1]: authtest.service changed stop -> dead
> May 16 11:23:41 pra0097 systemd[1]: Job authtest.service/stop finished, result=done
> May 16 11:23:41 pra0097 systemd[1]: Stopped Test Script to connect auth during shutdown.
> May 16 11:23:41 pra0097 systemd[1]: authtest.service: cgroup is empty
> 
> 
> The relevant piece of code:
> static void socket_enter_running(Socket *s, int cfd) {
> ...
> /* We don't take connections anymore if we are supposed to shut down anyway */
> if (unit_stop_pending(UNIT(s))) {
> log_unit_debug(UNIT(s), "Suppressing connection request since unit stop is scheduled.");
> ...
> 
> 
> bool unit_stop_pending(Unit *u) {
> ...
> return u->job && u->job->type == JOB_STOP;
> }
> 
> Would it not make sense to still allow connections while the unit is
> still running?  Or maybe, for compatibility, a boolean could be added
> to the socket unit definition to allow the socket to keep answering
> connections until it really is stopped.
> 
> If it were not a socket-activated unit, the two services would order
> and work just fine, so why should a socket unit be different?
> 
> Opinions?

This is indeed a shortcoming in systemd's model right now: we don't
permit a start and a stop job to be enqueued for the same unit at the
same time. But to do what you want to do we'd need to permit that: the
service is supposed to stop, but also temporarily start.

I don't really have any nice way out to recommend to you I
fear. Permitting multiple jobs to be enqueued for the same unit would
be a major change in the design of systemd, and would result in a
number of complex problems (i.e. detecting cycles and deadlocks
becomes much more complex).

The best I can offer is to change the design of the services in
question: instead of connecting to the other service only at shutdown,
instead establish the connection when starting up, and leave the
connection around. This way abnormal exits could be detected as well,
and no 

Re: [systemd-devel] how to correctly specify dependency on dbus

2017-05-29 Thread Lennart Poettering
On Tue, 23.05.17 23:01, prashantkumar dhotre (prashantkumardho...@gmail.com) 
wrote:

> Thanks.
> My service runs during early bootup.
> One intermittent issue I am seeing is that my service fails in
> dbus_bus_get_private() with this error:
> 
>  07:45:19 : dbus_bus_get_private() failed with error: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
> 
> The dbus service started at 07:44:34 and my service started at 07:45:19,
> which is 45 seconds after dbus, and I also see
> /var/run/dbus/system_bus_socket present before this error.

> I am not sure why the dbus API is failing even though the socket is present.
> So am I missing a dependency in my service file?

Maybe you haven't configured dbus for socket activation, and
dbus-daemon replaces the sockets already created or so?

Also, /var/run is an old name for /run, and should nowadays just be a
symlink. Is it possible that symlink is missing for you?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] log queries on DHCPServer provided by systemd-networkd

2017-05-29 Thread Lennart Poettering
On Thu, 25.05.17 09:15, Pascal (patate...@gmail.com) wrote:

> hi,
> 
> how to enable DHCPServer log queries ? I can't find any trace of
> conversations in the journal.

You can turn on debug logging in networkd by setting the env var
$SYSTEMD_LOG_LEVEL=debug for systemd-networkd. You can do that
relatively easily by invoking "systemctl edit systemd-networkd", and
then typing:

[Service]
Environment=SYSTEMD_LOG_LEVEL=debug

and then restarting networkd.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] systemd and NFS "bg" mounts.

2017-05-29 Thread Lennart Poettering
On Fri, 26.05.17 12:46, NeilBrown (ne...@suse.com) wrote:

> 
> Hi all,
>  it appears that systemd doesn't play well with NFS "bg" mounts.
>  I can see a few options for how to address this and wonder if anyone
>  has any opinions.

Yeah, this has come up before. Long story short: "bg" is simply not
compatible with systemd. We assume that the /bin/mount's effect is
visible in /proc/self/mountinfo, and everything else is considered a
bug, i.e. /bin/mount lying to us. And I think that's a pretty rational
assumption and requirement to make.

I am not particularly convinced the "bg" usecase is really such a
great idea, since it is necessarily racy: you never know whether mount
is actually in effect or not, even though /bin/mount claims so. I am
pretty sure other options (such as autofs mounts, which are dead-easy
to use in systemd: just replace the "bg" mount option in fstab by
"x-systemd.automount") are much better approaches to the problem at
hand: they also make your local system less dependent on remote
network access, but they do give proper guarantees about their
runtime: when the autofs mount is established the path is available.

Hence I am tempted to treat the issue as a documentation and warning
issue: accept that "bg" is not supported, but document this better. In
addition, we should probably log about "bg" being broken in the
fstab-generator. I have filed a bug about that now:

https://github.com/systemd/systemd/issues/6046

>  This is better, but the background mount.nfs can persist for a long
>  time.  I don't think it persists forever, but at least half an hour I
>  think.
> 
>  When the foo.mount unit is stopped, the mount.nfs process isn't
>  killed.

Hmm, if you see this, then this would be a bug: mount units that are
stopped should not leave processes around.

>  I don't think this is a major problem, but it is unfortunate and could
>  be confusing.  During testing I've had multiple mount.nfs background
>  processes all attached to the one .mount unit.

Humpf, could you file a bug?

While I think the "bg" concept is broken, as discussed above, having
FUSE mounts with processes in the background is actually supported,
and we should clean them up properly when the mount unit is stopped.

Hmm, maybe mount.nfs isn't properly killable? i.e. systemd tries to
kill it, but it refuses to be killed?

>  What should we do about bg NFS mounts with systemd?
>  Some options:
>- declare "bg" to be not-supported.  If you don't need the filesystem
>  to be present for boot, then use x-systemd.automount, or some other
>  automount scheme.
>  If we did this, we probably need to make it very obvious that "bg"
>  mounts aren't supported - maybe a log message that appears when
>  you do "systemctl status ..." ??

I am all for this, as suggested above. I'd only log from
fstab-generator though. (That said, if we want something stronger, we
could also add the fact that we stripped "bg" from the mount options
to the Description= of the generated mount unit.)

>- decide that "bg" is really just a lame attempt at automounting, and
>  that now we have real automounting, "bg" can be translated to that.
>  So systemd-fstab-generator would treat "bg" like
>  "x-systemd.automount" and otherwise strip it  from the list of
>  options.

I am a bit afraid of this I must say. The semantics are different
enough to likely cause more problems than we'd solve with this. Not
supporting this at all sounds like the much better approach here:
let's strip "bg" when specified.

>- do our best to really support "bg".  That means, at least, applying
>  a much larger timeout to "bg" mounts, and preferably killing any
>  background processes when a mount unit is stopped.  Below is a
>  little patch which does this last bit, but I'm not sure it is generally
>  safe.

As mentioned I think this would just trade one race for a couple of
new ones, and that appears to be a bad idea to me.

>  A side question is: should this knowledge about NFS be encoded in
>  systemd, or should nfs-utils add the necessary knowledge?

I am pretty sure we should keep special understanding of NFS at a
minimum in PID 1, but I think we can be less strict in
fstab-generator, as its primary job is compat with UNIX anyway.

> 
>  i.e. we could add an nfs-fstab-generator to nfs-utils which creates
>  drop-ins to modify the behaviour of the drop-ins provided by
>  systemd-fstab-generator.
>  Adding the TimeoutSec= would be easy.  Stripping the "bg" would be
>  possible.
>  Changing the remote-fs.target.requires/foo.mount symlink to be
>  remote-fs.target.requires/foo.automount would be problematic
>  though.

Well, I'd be fine with letting NFS do its own handling of the NFS
/etc/fstab entries, but I think the special casing of "bg" is fine to
simply add to the existing generator in systemd.

>  Could we teach systemd-fstab-generator to ignore $TYPE filesystems
>  if TYPE-fstab-generator existed?

Well, generators can 

Re: [systemd-devel] Run a separate instance of systemd-networkd in a namespace?

2017-05-29 Thread Lennart Poettering
On Fri, 26.05.17 11:44, Dmitrii Sutiagin (f3fli...@gmail.com) wrote:

> Hi everyone,
> 
> I'm trying to set up a VPN in a namespace, so I could use my base network
> connection as usual and at the same time spawn console or browser in that
> namespace where VPN is running. So far I've sorted out everything except DNS
> resolution. Inside namespace there is no systemd-networkd, so if my
> /etc/resolv.conf does not contain a valid external DNS server then DNS
> inside the namespace does not work. And since VPN tries to dynamically
> update /etc/resolv.conf (and with latest vpnc-script updates - actually
> communicates with systemd-resolved via busctl), I should not hardcode values
> in there. Openconnect inside a namespace is able to (somehow) talk with root
> namespace's systemd-networkd via busctl but systemd-resolved reports that
> "link X is not known", which is probably expected - this link is inside the
> namespace. So my ask is - can I somehow use systemd-resolved with such
> setup? I tried starting a separate process of systemd-resolved inside
> namespace directly and got:
> 
> -
> ...
> Failed to register name: File exists
> Could not create manager: File exists
> -
> 
> Can I somehow change the dbus name used by resolved, and this way I could
> edit vpnc-script to use the modified name..? Looks like it's not possible
> but maybe I overlooked something.
> 
> Please share your thoughts!

Sorry, but this is not supported. Both networkd and resolved assume
that the IPC and /run context they run in and the network namespace
they run in match. There's no support for mixing and matching them in
arbitrary ways, and it's unlikely this will ever be supported.

I am sorry,

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] pid namespaces, systemd and signals on reboot(2)

2017-05-29 Thread Michał Zegan


W dniu 29.05.2017 o 11:37, Lennart Poettering pisze:
> On Sat, 27.05.17 20:51, Michał Zegan (webczat_...@poczta.onet.pl) wrote:
> 
>> Hello.
>>
>> I came across the following:
>> The manpage reboot(2) says, that inside of a pid namespace, a reboot
>> call that normally would trigger restart actually triggers sending
>> sighup to the init of a namespace, and sigint is sent in case of
>> halt/poweroff.
> 
> That's misleading. This is not what happens. Instead, PID 1's parent
> (i.e. the container manager) will be notified about reboot() being
> called inside the container by a waitid() event of the container's PID
> 1, where the exit cause is reported to be an uncaught SIGHUP.
> 
> It's quite hacky to report it that way, since there's no way to
> distinguish a real SIGHUP exit from a reboot() exit this way, but it's
> how the kernel decided to do it.
> 
> Or in other words: SIGHUP on the container's PID 1 is how the reboot()
> is *reported*, not what actually happened.
> 
> Lennart
> 
What you just said means there is a man-pages documentation bug, and I
reported it yesterday. Thank you.





Re: [systemd-devel] Group of temporary but related units?

2017-05-29 Thread Lennart Poettering
On Sun, 28.05.17 17:13, Benno Fünfstück (benno.fuenfstu...@gmail.com) wrote:

> Hey list,
> 
> what would be a good way to manage temporary development environments with
> systemd? For example, if I quickly want to spawn up an environment where my
> service + perhaps some db or a queue or some other services are running. It
> would be nice to reuse systemd's service management capabilities for this.
> Or should I really write two sets of unit files for my services, one for
> spinning up a testing / development environment using some other
> supervision suite + another one for deployment with systemd?

That sounds like an option.

You can also use "systemd-run" to transiently run a program as a
service without having to create a unit file for it. It permits you to
put together a service ad-hoc on the command line without modifying
the system.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] About stable network interface names

2017-05-29 Thread Greg KH
On Mon, May 29, 2017 at 11:34:13AM +0200, Cesare Leonardi wrote:
> On 29/05/2017 07:10, Greg KH wrote:
> > Anyway, PCI can, and will sometimes, renumber it's devices on booting
> > again, that's a known issue.  It is rare, but as you have found out,
> > will happen.  So anything depending on PCI numbers will change.  Nothing
> > we can really do about that.
> 
> Do you mean that it could rarely happen on boot also without doing any
> change to the hardware?

Yes it can, I used to have a machine that renumbered the PCI buses every
5th boot or something, was fun for testing PCI changes on :)

> So, to avoid surprises, in the case of multiple NICs it's highly
> advisable to tie interface naming to the MAC address, isn't it?

Yes, or something else that you know will be stable in the system.

good luck!

greg k-h


Re: [systemd-devel] About stable network interface names

2017-05-29 Thread Reindl Harald



Am 29.05.2017 um 11:40 schrieb Lennart Poettering:

Well, different naming strategies have different advantages and
disadvantages. If you use the MAC address, then replacing hardware
becomes harder, and you can't cover hardware that doesn't have fixed
MAC addresses (or VMs)

then KVM, or whatever you mean by VMs, needs to be fixed

on our VMware cluster all nodes are Fedora, installed in 2008, and all
are configured with MAC addresses in the ifcfg-ethX files; they were
moved between different vCenter versions and hosts, and the MAC is stable



Re: [systemd-devel] pid namespaces, systemd and signals on reboot(2)

2017-05-29 Thread Lennart Poettering
On Sat, 27.05.17 20:51, Michał Zegan (webczat_...@poczta.onet.pl) wrote:

> Hello.
> 
> I came across the following:
> The manpage reboot(2) says, that inside of a pid namespace, a reboot
> call that normally would trigger restart actually triggers sending
> sighup to the init of a namespace, and sigint is sent in case of
> halt/poweroff.

That's misleading. This is not what happens. Instead, PID 1's parent
(i.e. the container manager) will be notified about reboot() being
called inside the container by a waitid() event of the container's PID
1, where the exit cause is reported to be an uncaught SIGHUP.

It's quite hacky to report it that way, since there's no way to
distinguish a real SIGHUP exit from a reboot() exit this way, but it's
how the kernel decided to do it.

Or in other words: SIGHUP on the container's PID 1 is how the reboot()
is *reported*, not what actually happened.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] About stable network interface names

2017-05-29 Thread Lennart Poettering
On Mon, 29.05.17 11:34, Cesare Leonardi (celeo...@gmail.com) wrote:

> On 29/05/2017 07:10, Greg KH wrote:
> > > For example, in one of those tests I initially had this setup:
> > > Integrated NIC: enp9s0
> > > PCIE1 (x1): dual port ethernet card [enp3s0, enp4s0]
> > > PCIE2 (x16): empty
> > > PCIE3 (x1): dual port ethernet card [enp7s0, enp8s0]
> > > 
> > > Then i inserted a SATA controller in the PCIE2 slot and three NICs got
> > > renamed:
> > > Integrated NIC: enp10s0
> > > PCIE1 (x1): dual port ethernet card [enp3s0, enp4s0]
> > > PCIE2 (x16): empty
> > > PCIE3 (x1): dual port ethernet card [enp8s0, enp9s0]
> > 
> > Do you mean to show that PCIE2 is still empty here?
> 
> No, sorry, cut and paste error. In the last case PCIE2 was occupied by the
> SATA controller.
> 
> > Anyway, PCI can, and will sometimes, renumber its devices on booting
> > again, that's a known issue.  It is rare, but as you have found out,
> > will happen.  So anything depending on PCI numbers will change.  Nothing
> > we can really do about that.
> 
> Do you mean that it could rarely happen on boot also without doing any
> change to the hardware?

Well, the firmware can do whatever it wants at any time, and this is
really up to the firmware. Ideally firmware would keep things strictly
stable, to make this useful, but you know how firmware is...

> So, to avoid surprises, in the case of multiple NICs it's highly
> advisable to tie interface naming to the MAC address, isn't it?

Well, different naming strategies have different advantages and
disadvantages. If you use the MAC address, then replacing hardware
becomes harder, and you can't cover hardware that doesn't have fixed
MAC addresses (or VMs).
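For reference, pinning a name to a MAC address can be done with a systemd.link(5) file; the MAC address and name below are placeholders:

```
# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=00:11:22:33:44:55

[Link]
Name=lan0
```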

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] About stable network interface names

2017-05-29 Thread Cesare Leonardi

On 29/05/2017 07:10, Greg KH wrote:

For example, in one of those tests I initially had this setup:
Integrated NIC: enp9s0
PCIE1 (x1): dual port ethernet card [enp3s0, enp4s0]
PCIE2 (x16): empty
PCIE3 (x1): dual port ethernet card [enp7s0, enp8s0]

Then i inserted a SATA controller in the PCIE2 slot and three NICs got
renamed:
Integrated NIC: enp10s0
PCIE1 (x1): dual port ethernet card [enp3s0, enp4s0]
PCIE2 (x16): empty
PCIE3 (x1): dual port ethernet card [enp8s0, enp9s0]


Do you mean to show that PCIE2 is still empty here?


No, sorry, cut and paste error. In the last case PCIE2 was occupied by 
the SATA controller.



Anyway, PCI can, and sometimes will, renumber its devices on the next
boot; that's a known issue.  It is rare but, as you have found out, it
will happen.  So anything depending on PCI numbers will change.  There
is nothing we can really do about that.


Do you mean that, on rare occasions, it could happen on boot even 
without any change to the hardware?


So, to avoid surprises, with multiple NICs it's highly advisable 
anyway to tie interface naming to the MAC address, isn't it?


Thank you for the clarifications, Greg.

Cesare.



Re: [systemd-devel] Systemd crash when trying to boot Angstrom image with systemd enabled

2017-05-29 Thread Lennart Poettering
On Mon, 29.05.17 02:40, Sonu Abraham (sonu.abra...@rfi.com.au) wrote:

> Hi ,
> 
> I get the following kernel panic when trying to use the system-console image 
> from the Angstrom distribution on an imx28evk board:
> 
> [9.506682] UBIFS (ubi0:0): FS size: 236302336 bytes (225 MiB, 1861 LEBs), 
> journal size 9023488 bytes (8 MiB, 72 LEBs)
> [9.517600] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
> [9.523667] UBIFS (ubi0:0): media format: w4/r0 (latest is w4/r0), UUID 
> 1E931B3C-5F0D-4F1E-B289-76950FFCF30B, small LPT model
> [9.546773] VFS: Mounted root (ubifs filesystem) readonly on device 0:13.
> [9.564543] devtmpfs: mounted
> [9.570181] Freeing unused kernel memory: 296K (c0836000 - c088)
> [9.576590] This architecture does not have kernel memory protection.
> [   10.736655] systemd[1]: System time before build time, advancing clock.
> Mounting cgroup to /sys/fs/cgroup/blkio of type cgroup with options blkio.
> Mounting cgroup to /sys/fs/cgroup/devices of type cgroup with options devices.
> Mounting cgroup to /sys/fs/cgroup/freezer of type cgroup with options freezer.
> Mounting cgroup to /sys/fs/cgroup/pids of type cgroup with options pids.
> Mounting cgroup to /sys/fs/cgroup/debug of type cgroup with options debug.
> Mounting cgroup to /sys/fs/cgroup/cpu,cpuacct of type cgroup with options 
> cpu,cpuacct.
> Mounting cgroup to /sys/fs/cgroup/perf_event of type cgroup with options 
> perf_event.
> Mounting cgroup to /sys/fs/cgroup/memory of type cgroup with options memory.
> Mounting cgroup to /sys/fs/cgroup/cpuset of type cgroup with options cpuset.
> systemd 230 running in system mode. (+PAM -AUDIT -SELINUX +IMA -APPARMOR 
> +SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL -XZ +LZ4 -SECCOMP 
> +BLKID -ELFUTILS +KMOD +IDN)
> No virtualization found in DMI
> No virtualization found in CPUID
> Virtualization XEN not found, /proc/xen/capabilities does not exist
> No virtualization found in /proc/device-tree/*
> No virtualization found in /proc/cpuinfo.
> This platform does not support /proc/sysinfo
> Found VM virtualization none
> Detected architecture arm.
> 
> Welcome to The Ångström Distribution v2016.12!
> 
> Set hostname to .
> Initializing machine ID from random generator.
> Installed transient /etc/machine-id file.
> [   12.502389] Kernel panic - not syncing: Attempted to kill init! 
> exitcode=0x008b
> [   12.502389]
> [   12.511603] CPU: 0 PID: 1 Comm: systemd Not tainted 4.8.17-fslc+g35ef795 #1
> [   12.518592] Hardware name: Freescale MXS (Device Tree)
> [   12.523819] [] (unwind_backtrace) from [] 
> (show_stack+0x10/0x14)
> [   12.531638] [] (show_stack) from [] (panic+0xbc/0x23c)
> [   12.538588] [] (panic) from [] (do_exit+0xa30/0xa7c)
> [   12.545347] [] (do_exit) from [] 
> (do_group_exit+0x38/0xbc)
> [   12.552634] [] (do_group_exit) from [] 
> (get_signal+0x1ec/0x890)
> [   12.560342] [] (get_signal) from [] 
> (do_signal+0xb4/0x44c)
> [   12.567615] [] (do_signal) from [] 
> (do_work_pending+0xc0/0xd8)
> [   12.575231] [] (do_work_pending) from [] 
> (slow_work_pending+0xc/0x20)
> [   12.583506] ---[ end Kernel panic - not syncing: Attempted to kill init! 
> exitcode=0x008b
> [   12.583506]
> 
> Please let me know if I am missing any changes in the kernel configuration, 
> or whether I need to pass any specific parameters on the kernel command line.

The required kernel options are documented in README, please have a
look.

Consider booting in debug mode (just add "debug" to the kernel command
line), to see more information on what is going on.
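
Concretely, that means appending options at the boot loader prompt.  A
sketch of a more verbose variant (option names as documented in systemd's
kernel-command-line(7); the exact boot loader mechanics depend on your
board setup):

```
# Appended to the existing kernel command line:
debug systemd.log_level=debug systemd.log_target=console
```

`debug` alone raises the log level; the two `systemd.*` options make the
choice explicit and direct the output to the console.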

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Systemd -- Using Environment variable in a Unit file from Environment File

2017-05-29 Thread Lennart Poettering
On Fri, 26.05.17 15:06, Raghavendra. H. R (raghuh...@gmail.com) wrote:

> Hi All,
> 
> I'm in a situation where the path of my server changes with each version
> change. I don't want to modify my systemd unit file every time; instead I
> want to handle the modification through my environment file.

Please do not use environment files for that. Simply add or remove
unit drop-in files. I.e. if you want to make minor changes to a unit
file foobar.service, then add
/etc/systemd/system/foobar.service.d/50-quux.conf or so, and put the
necessary settings there. "systemctl edit" helps you do this, and
makes it easy.
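
As a sketch, assuming a hypothetical foobar.service whose working
directory moves between versions, such a drop-in could look like:

```ini
# /etc/systemd/system/foobar.service.d/50-quux.conf
# Hypothetical drop-in: only the settings listed here override the
# original unit; everything else in foobar.service stays in effect.
[Service]
WorkingDirectory=/opt/server-2.4
```

"systemctl edit foobar.service" creates and opens exactly such a file
and reloads the manager afterwards, so the change takes effect without
touching the original unit file.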

> EnvironmentFile=/home/raghu/system.env
> *WorkingDirectory=${SERVER_PATH}*

Please read the documentation about EnvironmentFile=: the environment
variables it configures are passed to the executed processes and may be
referenced in the ExecXYZ= lines, but not in any other configuration
lines. This is because the precise environment is only determined at the
moment of activation, while other settings are resolved earlier and have
a broader scope than that.
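
In other words, reusing the SERVER_PATH variable from the question, the
distinction looks roughly like this (a sketch; file paths and values are
taken from the question or invented for illustration):

```ini
# /home/raghu/system.env (hypothetical contents):
#   SERVER_PATH=/opt/server-2.4

[Service]
EnvironmentFile=/home/raghu/system.env
# Works: ${SERVER_PATH} is expanded in Exec lines at activation time.
ExecStart=/usr/bin/server --root ${SERVER_PATH}
# Does NOT work: non-Exec settings cannot reference these variables.
# WorkingDirectory=${SERVER_PATH}
```

For settings like WorkingDirectory=, a drop-in file with the literal path
is the supported approach.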

Lennart

-- 
Lennart Poettering, Red Hat