[systemd-devel] /dev/log tends to block on socket based activation ...

2014-08-06 Thread Hoyer, Marko (ADITG/SW2)
Good morning everyone,

I'm playing around a bit with systemd's socket based activation of 
systemd-journald. My intention is to shift back in time the actual startup of 
systemd-journald.service to save resources (CPU) for early applications during 
startup. The respective socket is activated as usual to not lose any early 
(sys)logs. The actual startup of the service is delayed by adding some 
dependencies to targets (basic.target for instance).

In principle, the idea works as expected, but sometimes logging via 
syslog(..) blocks applications until the daemon is actually started. 
Depending on how the startup of such an application is integrated into the 
startup configuration, this can lead to deadlock situations.
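For illustration, the delaying part could be sketched as a drop-in like this 
(only the idea; in a real setup journald's default ordering against 
sysinit.target would need adjusting as well):

```ini
# /etc/systemd/system/systemd-journald.service.d/delay.conf (sketch)
[Unit]
# hold back the actual daemon start; the socket unit stays active early
After=basic.target
Requires=basic.target
```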

Does anyone have an idea how to prevent this blocking situation?

Some observations:
- The blocking does not happen on every syslog call; sometimes it happens 
after one call, sometimes after a few. I have not yet been able to isolate 
the concrete case that leads to a blocking socket.
- I doubt that the underlying socket buffer is full.
- The call is blocked in the send() syscall that is invoked by syslog().

Thx in advance for any hints ...


Best regards

Marko Hoyer
Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany
Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com
ADIT is a joint venture company of Robert Bosch GmbH/Robert Bosch Car 
Multimedia GmbH and DENSO Corporation
Sitz: Hildesheim, Registergericht: Amtsgericht Hildesheim HRB 3438
Geschäftsführung: Wilhelm Grabow, Katsuyoshi Maeda
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Has systemd booted up command

2013-07-18 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: systemd-devel-bounces+mhoyer=de.adit-jv@lists.freedesktop.org
 [mailto:systemd-devel-bounces+mhoyer=de.adit-jv@lists.freedesktop.org] On
 Behalf Of Umut Tezduyar
 Sent: Thursday, July 18, 2013 8:38 PM
 To: Lennart Poettering
 Cc: Mailing-List systemd
 Subject: Re: [systemd-devel] Has systemd booted up command
 
 On Thu, Jul 18, 2013 at 7:06 PM, Lennart Poettering lenn...@poettering.net
 wrote:
  On Thu, 18.07.13 10:08, Umut Tezduyar (u...@tezduyar.com) wrote:
 
  Hi,
 
  This is in reference to
  https://bugs.freedesktop.org/show_bug.cgi?id=66926 request.
 
  I have been polling systemd with "systemctl is-active default.target"
  to detect whether boot-up has been completed. I have noticed that
  this is not enough, though.
 
  It seems that starting a service that is not part of the initial job
  tree can remain in state "activating" after default.target is reached. I
  could use "systemctl list-jobs" to detect whether there are still jobs
  pending, but systemctl list-jobs's output is not meant for
  programmatic consumption.
 
  Same problem happens when I switch targets. Currently I rely on
  systemctl list-jobs output to detect if the target switch has been
  completed or not.
 
  What can we do about it?

Why not simply write a unit notify_default_target_done.service that
- is linked into multi-user.target.wants
- and defines After=default.target

Should work, to my understanding ...
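Such a unit might look roughly like this (the notification command path is 
hypothetical; replace it with whatever fits):

```ini
# notify_default_target_done.service (sketch)
[Unit]
Description=Signal that default.target has been reached
After=default.target

[Service]
Type=oneshot
# hypothetical command that reports "boot done" to whoever needs it
ExecStart=/usr/local/bin/notify-boot-done

[Install]
WantedBy=multi-user.target
```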


Cheers,

Marko


[systemd-devel] Impact when not loading ipv6 and autofs kernel module ...

2013-08-07 Thread Hoyer, Marko (ADITG/SW2)
Hello systemd developers,

I found that systemd automatically tries to load the ipv6 and autofs kernel 
modules when they are not compiled in.
Could you give me a hint as to what stops working when they are neither 
provided as kernel modules nor compiled in?

In the case of autofs, I found that automount units no longer work (which I 
would have expected). Without ipv6, I haven't found any problems so far.

So:


1.   Is there any other impact, besides the one described, when the autofs 
kernel feature is missing?

2.   What impact should I expect if the kernel does not provide ipv6 
functionality?


Thx for any input.

Best regards

Marko Hoyer
Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany
Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com


Re: [systemd-devel] How to delete device units presented in systemd-analyze plot.

2013-08-07 Thread Hoyer, Marko (ADITG/SW2)
Hi Tony,

based on my experience, I doubt that suppressing the loading of device units 
will speed up systemd that much. Other parts delay startup far more 
significantly (cgroups in some cases, loading the unit set at startup, 
executing the generators, and finally loading the systemd binary and its 
shared libraries ...). You'll probably gain more by starting to optimize at 
those ends ...

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948

 -Original Message-
 From: systemd-devel-bounces+mhoyer=de.adit-jv@lists.freedesktop.org
 [mailto:systemd-devel-bounces+mhoyer=de.adit-jv@lists.freedesktop.org] On
 Behalf Of Tony Seo
 Sent: Thursday, August 08, 2013 3:12 AM
 To: Mantas Mikulėnas
 Cc: systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] How to delete device units presented in systemd-
 analyze plot.
 
 
  I understand what you mean, but I think I can cut the time spent loading
 device units if I keep them from appearing on my plot.
 
  I want to try temporarily removing some kinds of device units and then
 measure the boot speed on my board. (Actually, I believe that the loading
 time for device units affects boot speed.)
 
  Do you think that the loading time for device units doesn't affect the
 boot speed of the machine?
 
  Thanks.
 
 
 
 2013/8/8 Mantas Mikulėnas graw...@gmail.com
 
 
   On Wed, Aug 7, 2013 at 8:03 PM, Tony Seo tonys...@gmail.com wrote:
   
 Hello.
   
Now, I have studied systemd for optimizing systemd on my board.
   
After edited several units, I would like to delete some device
 configuration
units existing on my picture from systemd-analyze(best-dream-
 boy.blogspot)
   
I read the manual pages about systemd.device, I tried to delete
 device units
through editing /etc/fstab and systemctl mask.
   
But I couldn't delete those device units, at least existing units on
 the
picture.
 
 
   You cannot delete these units because they do not exist anywhere
   except in memory – they're only a way to represent device status in
   the format of systemd units. In other words, they do not cause the
   problems you're having; they only make the problems visible in your
   bootchart.
 
   --
   Mantas Mikulėnas graw...@gmail.com
 
 



Re: [systemd-devel] Impact when not loading ipv6 and autofs kernel module ...

2013-08-14 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Lennart Poettering [mailto:lenn...@poettering.net]
 Sent: Friday, August 09, 2013 5:49 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Impact when not loading ipv6 and autofs kernel
 module ...
 
 On Wed, 07.08.13 11:24, Hoyer, Marko (ADITG/SW2) (mho...@de.adit-jv.com)
 wrote:
 
  Hello systemd developers,
 
  I found that systemd automatically tries to load ipv6 and autofs kernel
 modules, when they are not compiled in.
  Could you give me a hint what is not working, when they are neither provided
 as kernel modules nor compiled in?
 
  In case of autofs I found that automount units are not working any more (I
 would have expected this). Without ipv6 I can't find any problems by now.
 
  So:
 
 
  1.   Is there any other impact but the described one when the
  autofs kernel feature is missing?
 
 No. .automount units become unavailable, but that's all.
 
  2.   What impact do I have to expect in case the kernel does not provide
 ipv6 functionality?
 
 Well, support for IPv6 goes away, of course. The reason we explicitly load the
 module at early boot is that we try to configure the loopback device to ::1,
 but that can only work if IPv6 is available in the kernel. Doing this kind of
 loopback configuration of IPv6, however, is not hooked up to the kernel-side
 auto-loading of ipv6.ko, hence we load it explicitly from userspace.
 
 Lennart

Thx for the input.

What is systemd using the loopback for, so that it has to be set up so early? 
Is there any impact if I set up the loopback later? We are in a situation 
where we need ipv4 for general purposes and ipv6 only for special ones. So 
we'd like to set up ipv6 later for performance reasons, which especially 
concerns the module loading.

 
 --
 Lennart Poettering - Red Hat, Inc.


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


Re: [systemd-devel] Need advice on daemon's architecture

2013-11-03 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: systemd-devel-boun...@lists.freedesktop.org [mailto:systemd-devel-
 boun...@lists.freedesktop.org] On Behalf Of Colin Guthrie
 Sent: Sunday, November 03, 2013 12:54 PM
 To: Peter Lemenkov; systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Need advice on daemon's architecture
 
 'Twas brillig, and Peter Lemenkov at 03/11/13 06:40 did gyre and gimble:
  Hello All!
  I'm working on a system service which uses systemd intensively. Right
  now it's socket-activated, with main service of type simple. I
  recently added support for querying and publishing some internals via
  D-Bus, so it has a D-Bus name now. Does it add anything if I change
  type of a main service to dbus thus allowing systemd to know for
  sure if my service is fully initialized?
 
 
 If you are using systemd intensively, then you may want to use Type=notify.
 
 With type=dbus, systemd will consider things ready when you take the name on
 the bus, but this might not actually be the last thing you do to initialise
 your daemon (although if this is the only interface for clients this is not a
 problem!).
 
 You still might want to use sd_notify() instead. This can also pass through
 some internal performance info to systemd which will show up on systemctl
 status output which is kinda nice.
 
 Col
 
 
 --
 
 Colin Guthrie
 gmane(at)colin.guthr.ie
 http://colin.guthr.ie/
 
 Day Job:
   Tribalogic Limited http://www.tribalogic.net/ Open Source:
   Mageia Contributor http://www.mageia.org/
   PulseAudio Hacker http://www.pulseaudio.org/
   Trac Hacker http://trac.edgewall.org/

Isn't the classical Linux way an option too?
- the daemon does its initialization with the calling thread
- once it is done with the initialization, it forks off a process that 
carries on with the daemon's work (the main loop, probably)
- the calling thread returns, which signals systemd that the daemon is up now

Type=forking must be set in the .service file to support this architecture.

Are there any drawbacks with this solution?

I'm just asking because I'm working at the moment on a daemon that is going 
exactly this way ... 
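For reference, the unit for such a forking daemon would look roughly like 
this (paths and names hypothetical):

```ini
[Unit]
Description=Classical forking daemon (sketch)

[Service]
Type=forking
# systemd considers startup finished once the initial process exits
ExecStart=/usr/local/bin/mydaemon
# optional but recommended, so systemd can track the main process
PIDFile=/run/mydaemon.pid

[Install]
WantedBy=multi-user.target
```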

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948



Re: [systemd-devel] Need advice on daemon's architecture

2013-11-03 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: systemd-devel-boun...@lists.freedesktop.org [mailto:systemd-devel-
 boun...@lists.freedesktop.org] On Behalf Of Cristian Rodríguez
 Sent: Sunday, November 03, 2013 3:25 PM
 To: systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Need advice on daemon's architecture
 
 El 03/11/13 10:42, Hoyer, Marko (ADITG/SW2) escribió:
 
  Isn't the classical Linux way an option to?
  - the daemon does its initialization with the calling thread
  - once it is done with the initialization, it forks off a process that
  goes on with the daemons work (the main loop probably)
  - the calling thread returns, which signals systemd that the daemon is
  up now
 
  Type=forking must be defined in the .service to support this architecture.
 
  Are there any drawbacks with this solution?
 
 Yes, having reviewed dozens of daemons for migration to systemd, I can assure
 yours will also be missing something of the required initialization sequence
 (see daemon(7) ) or doing it wrong, as the number of daemons that do this
 routines correctly is almost non-existent.
 
 For new daemons, please use Type=notify.
 
 
 
 
 
 --
 Judging by their response, the meanest thing you can do to people on the
 Internet is to give them really good software for free. - Anil Dash


Thx for the fast feedback. Good hint with the man page; I'll have a more 
detailed look at it. I think when you need to stay somewhat independent from 
systemd and don't have a D-Bus interface that can be used for synchronization, 
there is probably no other way than the classical one.

But in case I'm starting in such a well-prepared environment as the one 
provided by a systemd service, I hopefully will not run into any trouble even 
if my daemon is missing something of the required initialization sequence or 
doing it wrong ;)

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


Re: [systemd-devel] Need advice on daemon's architecture

2013-11-04 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Lennart Poettering [mailto:lenn...@poettering.net]
 Sent: Monday, November 04, 2013 3:42 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: Colin Guthrie; Peter Lemenkov; systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Need advice on daemon's architecture
 
 On Sun, 03.11.13 13:42, Hoyer, Marko (ADITG/SW2) (mho...@de.adit-jv.com)
 wrote:
 
   If you are using systemd intensively, then you may want to use
 Type=notify.
  
   With type=dbus, systemd will consider things ready when you take the
   name on the bus, but this might not actually be the last thing you
   do to initialise your daemon (although if this is the only interface
   for clients this is not a problem!).
  
   You still might want to use sd_notify() instead. This can also pass
   through some internal performance info to systemd which will show up
   on systemctl status output which is kinda nice.
  
   Col
  
  
 
  Isn't the classical Linux way an option too?
 
 Well, it is, but it is hard to get right and a lot of redundant code involved.
 
  - the daemon does its initialization with the calling thread
  - once it is done with the initialization, it forks off a process that
goes on with the daemons work (the main loop probably)
 
 No, this is wrong. You really shouldn't do initialization in one process and
 then run the daemon from a forked off second one. A lot of (library) code is
 not happy with being initialized in one process and being used in another
 forked off one. For example, any code involving threads is generally allergic
 to this, since background threads cease to exist in the forked off child. This
 is nearly impossible to work around [1]. Or code involving file locks or even
 a lot of socket code doesn't like it either. In general you should not assume
 that any library can handle it, except for those cases where the library
 author explicitly tells you that it is safe.
 
 Actually, all systemd libraries (like libsystemd-journal or
 libsystemd-bus) and a couple of my other libraries (like libcanberra)
 explicitly check for the PID changing and refuse operation in such cases,
 simply because the effects of fork()ing are so nasty. Or to be more explicit:
 if you initialize an sd_journal or sd_bus object in one process and then
 try to execute any operation on it in a forked-off child process, you will get
 -ECHILD returned, which is how we indicate this misuse.
 
 So, what is the right way to do this? Fork off early, do the initialization in
 the child, and signal the parent that you are done via a pipe, so that the
 parent exits only after the child is done. This is explained in daemon(7).
 
 Or even better: don't bother at all, write your services without forking, and
 use Type=dbus or Type=notify instead.
 
 Lennart
 
 [1] Yeah, and don't tell me pthread_atfork() is the solution for
 this. It's not. It makes things even worse, since there's no defined
 execution order defined for it. If you claim pthread_atfork() was a
 solution and not a problem in itself, then you obviously have never
 used it, or only in trivial programs.
 
 --
 Lennart Poettering, Red Hat


Thx for the comprehensive answer.

My daemon is quite simple; threads are only used as workers started from the 
daemon process out of the main loop. So I guess: no problem at this end.

Additionally, I'm polling a udevd socket opened via libudev, which is 
currently initialized from the calling process. I'm also working on the 
connection to a control tool, which I plan to realize via sockets as well, so 
I plan to open a listener socket during initialization.

Maybe I'll just switch to the sd_notify() way until someone appears who is 
not systemd and wants to use my daemon ...

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948




Re: [systemd-devel] Question regarding the NotifyAccess parameter

2013-11-26 Thread Hoyer, Marko (ADITG/SW2)
 One more issue I observed: if I specify Restart=on-failure and the
  watchdog timer expires, it restarts the service. But I can see that it
  creates two processes rather than restarting the process. If I do
  systemctl restart Myservice, it kills the previous instance of the
  service and starts a new one. Any pointers on why this happens?

This part was already reported as a bug in May:
http://lists.freedesktop.org/archives/systemd-devel/2013-May/011030.html

To the best of my knowledge, this has been fixed in systemd 203, 204, or 205 ...
Please note that the link above does not contain the final bug fix. Some 
discussion followed, which led to the final solution at a certain point. 
Follow the threads and you'll find it ...


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


[systemd-devel] Dynamic priorities for service loading using systemd ...

2012-09-21 Thread Hoyer, Marko (ADITG/SW2)
Hi all,

hope that is the right forum to raise my question.

I'm trying to realize a kind of dynamic mandatory / lazy service scenario using 
systemd.

In detail: services are either mandatory or lazy. Mandatory services are 
started first; once all mandatory services have been started, the lazy ones 
may start. A lazy service must never start before the last mandatory one has 
been started.

For a static scenario, I would define a mandatory target and give every lazy 
service a Requires= and After= dependency on this target.
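The static variant could be sketched roughly like this (unit and target names 
are made up):

```ini
# mandatory.target (sketch): reached once all mandatory services are up
[Unit]
Description=All mandatory services have been started

# --- in each lazy service's unit (e.g. lazy-foo.service, hypothetical) ---
[Unit]
Description=A lazy service
Requires=mandatory.target
After=mandatory.target
```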

But in my case, I need a more dynamic scenario. The assignment of services to 
mandatory or lazy is not fixed: it can change while the system is running or, 
in the worst case, early during boot-up. To my understanding, I would have to 
rewrite the .service files automatically to realize such a scenario, which 
looks a bit complicated to me.

Does anyone know a prettier solution for this?

Thanks in advance for help!!


Best regards

Marko Hoyer
Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW1)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany
Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com


[systemd-devel] cdrom_id opens device with O_EXCL, why?

2014-09-18 Thread Hoyer, Marko (ADITG/SW2)
Hello together,

I recently stumbled over cdrom_id opening the device with the O_EXCL flag set, 
if it is not currently mounted:

fd = open(node, O_RDONLY|O_NONBLOCK|(is_mounted(node) ? 0 : O_EXCL));

The effect of this is that automatically mounting a cdrom sometimes fails 
with "resource busy" (EBUSY), if "change" uevents of the device are processed 
by udevd while the automounter (udisks, or something different in my case) is 
currently trying to mount the device, triggered by a previous "add" or 
"change" uevent.

I have two questions on this issue. Maybe someone of you can help me:

1. Is there any particular reason why cdrom_id should open the device 
exclusively (especially since it does not open the device exclusively when it 
is already mounted)?

2. If there is a good reason to keep this behavior: what is the best way for 
an automounter to deal with it? Retry? Something different?


Thx in advance for valuable input.


Best regards

Marko Hoyer

Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany

Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com



Re: [systemd-devel] cdrom_id opens device with O_EXCL, why?

2014-09-18 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Hoyer, Marko (ADITG/SW2)
 Sent: Thursday, September 18, 2014 8:22 AM
 To: systemd-devel@lists.freedesktop.org
 Subject: cdrom_id opens device with O_EXCL, why?
 
 Hello together,
 
 I recently stumbled over cdrom_id opening the device with the O_EXCL flag set,
 if it is not currently mounted:
 
 fd = open(node, O_RDONLY|O_NONBLOCK|(is_mounted(node) ? 0 : O_EXCL));
 
 The effect of this is that automatically mounting a cdrom sometimes results in
 resource busy, if change uevents of the devices are processed by udevd
 while the automounter (udisks or something different in my case) is currently
 trying to mount the device triggered by a previous add or change uevent.
 
 I have two questions on this issue. Maybe someone of you can help me:
 
 1. Is there any particular reason why cdrom_id should open the device
 exclusively (especially since it is not opened exclusively when it is already
 mounted)?
 
 2. If there is any good reason to keep this behavior: How is the best way for
 an automounter to deal with this? Retry? Something different?
 
 
 Thx in advance for valuable input.
 

There is one additional, more general issue with the behavior of cdrom_id:

- Insert a cdrom and mount it.
- cd into the mounted subtree of the cdrom.
- Do a lazy unmount (umount -l /dev/sr0).

From now on, cdrom_id fails completely, for the following reasons:
- the shell that cd'ed into the mounted subtree keeps busy i-nodes
- this keeps the kernel from releasing the /dev/sr0 node
- to userspace, the lazy umount looks as if nothing is mounted any more (no 
entry in /proc/self/mountinfo, which is what cdrom_id evaluates)
- because of this, cdrom_id tries to open the device exclusively, which fails
- after 20 retries, cdrom_id finally gives up

The kernel itself can deal with this situation: even with these busy i-nodes 
hanging around in the back, it allows mounting the CD drive again at a 
different position. So the only blocker seems to be cdrom_id failing to open 
the device exclusively.

Any comments?

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948



Re: [systemd-devel] cdrom_id opens device with O_EXCL, why?

2014-09-18 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: David Herrmann [mailto:dh.herrm...@gmail.com]
 Sent: Thursday, September 18, 2014 10:31 AM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] cdrom_id opens device with O_EXCL, why?
 
 Hi
 
 On Thu, Sep 18, 2014 at 8:22 AM, Hoyer, Marko (ADITG/SW2) mho...@de.adit-
 jv.com wrote:
  Hello together,
 
  I recently stumbled over cdrom_id opening the device with the O_EXCL flag
 set, if it is not currently mounted:
 
  fd = open(node, O_RDONLY|O_NONBLOCK|(is_mounted(node) ? 0 : O_EXCL));
 
  The effect of this is that automatically mounting a cdrom sometimes results
 in resource busy, if change uevents of the devices are processed by udevd
 while the automounter (udisks or something different in my case) is currently
 trying to mount the device triggered by a previous add or change uevent.
 
  I have two questions on this issue. Maybe someone of you can help me:
 
  1. Is there any particular reason why cdrom_id should open the device
 exclusively (especially since it is not opened exclusively when it is already
 mounted)?
 
  2. If there is any good reason to keep this behavior: How is the best way
 for an automounter to deal with this? Retry? Something different?
 
 This was introduced in:
 
 commit 38a3cde11bc77af49a96245b8a8a0f2b583a344c
 Author: Kay Sievers kay.siev...@vrfy.org
 Date:   Thu Mar 18 11:14:32 2010 +0100
 
 cdrom_id: open non-mounted optical media with O_EXCL
 
 This should prevent confusing drives during CD burning sessions. Based
 on a patch from Harald Hoyer.
 
 
 According to this commit we use O_EXCL to not confuse burning-sessions that
 might run in parallel. Admittedly, burning-sessions should usually open the
 device themselves via O_EXCL, or lock critical sections via scsi, but I guess
 back in the days we had to work around bugs in other programs (or I may be
 missing something non-obvious; I'm not that much into scsi..).
 
 Regarding what do to: If you deal with cdrom devices and get EBUSY, I'd simply
 ignore the device. In case of udev, you get a uevent once udev is done, and
 usually that means the device changed in some way so it's *good* you wait for
 it to be processed. So your automounter will get a uevent once udev is done
 and can then try to mount the device.
 
 Thanks
 David

Thx for the answer.

The automounter is listening on the udev socket, so it actually does wait for 
the event to be processed completely. But unfortunately, sequences of 
"change" events can arrive within a short time frame, so the automounter may 
be trying to mount the device triggered by the first event while udevd is 
currently processing the second one. That's what's actually happening in my 
case. Besides that, the issue with cdrom_id not working after a lazy unmount 
(described in my second mail) seems to me a bit more critical.





Re: [systemd-devel] cdrom_id opens device with O_EXCL, why?

2014-09-18 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: David Herrmann [mailto:dh.herrm...@gmail.com]
 Sent: Thursday, September 18, 2014 1:57 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: systemd-devel@lists.freedesktop.org; Harald Hoyer; Kay Sievers
 Subject: Re: [systemd-devel] cdrom_id opens device with O_EXCL, why?
 
 Hi again
 
 On Thu, Sep 18, 2014 at 1:34 PM, David Herrmann dh.herrm...@gmail.com wrote:
  I'm putting Harald and Kay on CC, as they added O_EXCL to protect
  against parallel burning-sessions. Maybe they can tell you whether
  that is still needed today and whether we can drop it.
 
 So my conception of O_EXCL was kinda wrong. O_EXCL on block devices fails with
 EBUSY if, and only if, there's someone else also opening the device with
 O_EXCL. You can still open it without O_EXCL. Now, the kernel-internal mount
 helpers always keep O_EXCL for mounted block devices. This way, user-space can
 open block devices via O_EXCL and be sure no-one can mount it in parallel.
 

Yeah, that's true; that's what I found as well. But exactly this parallel 
attempt to mount the device while cdrom_id is working is the problem here. I 
understand that some processes need exclusive access to a device for good 
reasons (to prevent mounting, for instance). I'd like to understand the 
reason for cdrom_id in particular, since it takes exclusive access only if no 
one has mounted the device before.

 For your automounter, this means you should just drop the event on EBUSY. If
 udev was the offender, you will get a follow-up event. If fsck was the
 offender, you're screwed anyway as it takes ages to complete (but fsck
 shouldn't be any problem for the automounter, anyway). if anyone else is the
 offender, you have no idea what they do, so you should just fail right away.

I'll have to think about just dropping the event. My first guess is that it 
is somewhat complicated, because there is a comparably complex state machine 
in the background. My way out would probably be to implement a retry 
mechanism for the mounting. That would cover everything you suggested as 
well, I guess.

 
 Regarding lazy-unmount: It'd require kernel support to notice such usage in
 user-space. I don't plan on working on this (and nobody really cares). But if
 there will be a kernel-interface, we'd gladly accept patches to fix cdrom_id.

I'm not really happy with the lazy-umount mechanism either, especially with 
the completely untransparent behavior in the background. But in my case it 
seems to be the only way out. The thing is that I cannot guarantee that 
applications won't keep i-nodes of a mounted SD card (just cd'ing into the 
subtree is enough), but I have to react somehow when the card is removed. 
There are two options:

1. Don't unmount the partition.
   - In principle, this is somehow OK (not nice); any further IO access 
     leads to IO errors.
   - Since the mount point is still available, other applications can enter 
     it as well (OK, but not nice).
   - The trouble starts when someone inserts another SD card:
     - I'm not able to mount the partition (with the same number) of the new 
       SD card, since the device node is still mounted.
     - Applications entering the mount point get lots of rubbish when they 
       access i-nodes linked to cached parent nodes (the files in the root 
       dir, for instance).
   So this is really not a good way out here.

2. Lazily unmount the partition, so that the scope is really limited to 
applications that keep i-nodes, and only to the i-nodes they keep.
   - No other application can access the invalid cached i-nodes any more.
   - I can mount a new SD card without any problems, even with the same 
     partition number.
   - The kept i-nodes are removed automatically when the application decides 
     to release them.
   - Part of the trouble remains: the application keeping the i-nodes still 
     gets rubbish if a new card is inserted, but only that one application. 
     Well, the world is not perfect.

So back to the initial question: is there still a valid reason why cdrom_id
acquires exclusive access? The CD-burning case is, at least for me, not really a
problem. If this is the only reason, I could simply patch it out locally. But to
be honest, I can't really imagine how this mechanism could even help with that
use case ...

 
 Thanks
 David

Regards,

Marko

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Improving module loading

2014-12-20 Thread Hoyer, Marko (ADITG/SW2)
Hi,

 -Original Message-
 From: systemd-devel [mailto:systemd-devel-
 boun...@lists.freedesktop.org] On Behalf Of Umut Tezduyar Lindskog
 Sent: Tuesday, December 16, 2014 4:55 PM
 To: systemd-devel@lists.freedesktop.org
 Subject: [systemd-devel] Improving module loading
 
 Hi,
 
 Is there a reason why systemd-modules-load is loading modules
 sequentially? Few things can happen simultaneously like resolving the
 symbols etc. Seems like modules_mutex is common on module loads which
 gets locked up on few occasions throughout the execution of
 sys_init_module.

We are actually doing this (in embedded systems that need to be up very fast
with limited resources) and have gained a lot. Mainly, IO and CPU can be better
utilized by loading modules in parallel (one module is loaded while another one
probes for hardware or does memory initialization).
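The pattern can be sketched with a thread pool dispatching modprobe calls; ADIT's actual tool is not public, and the `runner` hook here is our addition for testability:

```python
# Sketch: load kernel modules concurrently. Threads are cheap here
# because each one mostly blocks in the module-loading syscall
# (via modprobe). The `runner` hook is an assumption for testing.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def load_modules_parallel(modules, max_workers=4, runner=None):
    """Return a dict mapping module name -> exit status."""
    if runner is None:
        runner = lambda mod: subprocess.call(["modprobe", mod])
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so zip() pairs results correctly
        return dict(zip(modules, pool.map(runner, modules)))
```

In practice one would partition modules so that independent drivers land in different workers, letting IO-bound and CPU-bound init phases overlap.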

 
 The other thought is, what is the preferred way of loading modules when
 they are needed. Do they have to be loaded on ExecStartPre= or as a
 separate service which has ExecStart that uses kmod to load them?
 Wouldn't it be useful to have something like ExecStartModule=?

I had such a discussion earlier with some of the systemd guys. My intention was
to introduce an additional unit for module loading, for exactly the reason you
mentioned. The following (reasonable) outcome was:
- It is dangerous to load kernel modules from PID 1, since module loading can
get stuck
- Since modules are actually loaded by the thread that invokes the syscall,
systemd would need additional threads
- Multi-threading is not really aimed for in systemd, for stability reasons
Probably the safest way to do what you intend is to use an additional process
to load your modules, which can easily be done with an ExecStartPre= line in a
service file. We are doing it exactly this way, not with kmod but with a tool
that loads modules in parallel.
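A sketch of that ExecStartPre= pattern as a unit file; the service, binary, and module names are invented for illustration, and modprobe stands in for the internal loader mentioned above:

```ini
# camera.service -- illustrative only; all names are invented
[Unit]
Description=Early camera application
DefaultDependencies=no

[Service]
Type=notify
# Load the driver from a separate process, not from PID 1; a hang
# here only stalls this service, not the manager.
ExecStartPre=/sbin/modprobe example_camera_drv
ExecStart=/usr/bin/camera-app
```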

Btw: Be careful with synchronization. We found that lots of kernel modules
export device nodes in the background (alsa, some graphics drivers, ...).
With the approach mentioned above, you move the kernel module loading and the
actual use of the driver interface very close together in time. This might lead
to race conditions. It is even worse when you need to access sysfs attributes,
which some drivers export even after the device is already available and
uevents have been sent out. For such modules, there is actually no other way to
synchronize but to wait for the attributes to appear.

 
 Umut
 ___
 systemd-devel mailing list
 systemd-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/systemd-devel

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948



Re: [systemd-devel] Improving module loading

2014-12-21 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Greg KH [mailto:gre...@linuxfoundation.org]
 Sent: Saturday, December 20, 2014 6:11 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: Umut Tezduyar Lindskog; systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Improving module loading
 
 On Sat, Dec 20, 2014 at 10:45:34AM +, Hoyer, Marko (ADITG/SW2)
 wrote:
  Hi,
 
   -Original Message-
   From: systemd-devel [mailto:systemd-devel-
   boun...@lists.freedesktop.org] On Behalf Of Umut Tezduyar Lindskog
   Sent: Tuesday, December 16, 2014 4:55 PM
   To: systemd-devel@lists.freedesktop.org
   Subject: [systemd-devel] Improving module loading
  
   Hi,
  
   Is there a reason why systemd-modules-load is loading modules
   sequentially? Few things can happen simultaneously like resolving
   the symbols etc. Seems like modules_mutex is common on module loads
   which gets locked up on few occasions throughout the execution of
   sys_init_module.
 
  We are actually doing this (in embedded systems which need to be up
  very fast with limited resources) and gained a lot. Mainly, IO and
 CPU
  can be better utilized by loading modules in parallel (one module is
  loaded while another one probes for hardware or is doing memory
  initialization).
 
 If you have control over your kernel, why not just build the modules
 into the kernel, then all of this isn't an issue at all and there is no
 overhead of module loading?

It is a question of kernel image size and startup performance.
- We are somewhat limited in the size of the storage we load the kernel from.
- Loading the image is a kind of monolithic block in terms of time, during
which you can hardly do things in parallel
- We strongly follow Umut's idea (loading things only when they are actually
needed) to get early services up very early (e.g. rendering a camera image on a
display in less than 2secs after power-on)
- Some modules do time/CPU-consuming things in init(), which would delay the
entry into userspace
- Deferred init calls are not really a solution, because they cannot be
controlled at the needed granularity

So finally it is a trade-off between compiling things in and paying the
overhead of module loading to gain the flexibility of loading things later.
 
 
 greg k-h



Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


Re: [systemd-devel] Improving module loading

2014-12-21 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Umut Tezduyar Lindskog [mailto:u...@tezduyar.com]
 Sent: Saturday, December 20, 2014 6:45 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Improving module loading
 
 Hi Marko,
 
 Thank you very much for your feedback!

You're welcome ;)

 
 On Sat, Dec 20, 2014 at 11:45 AM, Hoyer, Marko (ADITG/SW2)
 mho...@de.adit-jv.com wrote:
  Hi,
 
  -Original Message-
  From: systemd-devel [mailto:systemd-devel-
  boun...@lists.freedesktop.org] On Behalf Of Umut Tezduyar Lindskog
  Sent: Tuesday, December 16, 2014 4:55 PM
  To: systemd-devel@lists.freedesktop.org
  Subject: [systemd-devel] Improving module loading
 
  Hi,
 
  Is there a reason why systemd-modules-load is loading modules
  sequentially? Few things can happen simultaneously like resolving
 the
  symbols etc. Seems like modules_mutex is common on module loads
 which
  gets locked up on few occasions throughout the execution of
  sys_init_module.
 
  We are actually doing this (in embedded systems which need to be up
 very fast with limited resources) and gained a lot. Mainly, IO and CPU
 can be better utilized by loading modules in parallel (one module is
 loaded while another one probes for hardware or is doing memory
 initialization).
 
 
  The other thought is, what is the preferred way of loading modules
  when they are needed. Do they have to be loaded on ExecStartPre= or
  as a separate service which has ExecStart that uses kmod to load
 them?
  Wouldn't it be useful to have something like ExecStartModule=?
 
  I had such a discussion earlier with some of the systemd guys. My
 intention was to introduce an additional unit for module loading for
 exactly the reason you mentioned. The following (reasonable) outcome
 was:
 
 Do you have links for the discussions, I cannot find them.

Actually not, sorry. The discussion did not take place on any mailing list.

 systemd already has a service that loads the modules.

Sorry, there is a word missing in my sentence above. My idea was not to
introduce a unit for module loading but a unit type of its own, such as
.kmodule. The idea was to define .kmodule units that each load one or a set of
kernel modules at a certain point during startup, simply by integrating them
into the startup dependency tree. This idea would have required integrating a
kind of worker thread into systemd. The outcome was as summarized below.

The advantages over systemd-modules-load are:
- modules can be loaded in parallel
- different sets of modules can be loaded at different points in time during
startup

 
  - It is dangerous to load kernel modules from PID 1 since module
  loading can get stuck
  - Since modules are actually loaded with the thread that calls the
  syscall, systemd would need additional threads
  - Multi Threading is not really aimed in systemd for stability
 reasons
  The probably safest way to do what you intended is to use an
 additional process to load your modules, which could be easily done by
 using ExecStartPre= in a service file. We are doing it exactly this way
 not with kmod but with a tool that loads modules in parallel.
 
  Btw: Be careful with synchronization. We found that lots of kernel
 modules are exporting device nodes in the background (alsa, some
 graphics driver, ...). With the proceeding mentioned above, you are
 moving the kernel module loading and the actual use of the driver
 interface very close together in time. This might lead to race
 conditions. It is even worse when you need to access sys attributes,
 which are exported by some drivers even after the device is already
 available and uevents have been sent out. For such modules, there
 actually is no other way for synchronization but waiting for the
 attributes to appear.
 
 We are aware of the potential complications and races. But good to be
 reminded :)
 

;) We actually stumbled over lots of things here while rolling out this
approach. Sometimes it is really funny that simple questions such as "What does
your service actually need?" are hard to answer. It seems that things sometimes
work more or less accidentally, owing to the fact that the udev trigger comes
very early compared to the startup of the services.

 Umut
 
 
 
  Umut
  ___
  systemd-devel mailing list
  systemd-devel@lists.freedesktop.org
  http://lists.freedesktop.org/mailman/listinfo/systemd-devel
 
  Best regards
 
  Marko Hoyer
  Software Group II (ADITG/SW2)
 
  Tel. +49 5121 49 6948
 


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


Re: [systemd-devel] Improving module loading

2014-12-21 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: systemd-devel [mailto:systemd-devel-
 boun...@lists.freedesktop.org] On Behalf Of Tom Gundersen
 Sent: Saturday, December 20, 2014 4:57 PM
 To: Umut Tezduyar
 Cc: systemd Mailing List
 Subject: Re: [systemd-devel] Improving module loading
 
 
 On 16 Dec 2014 17:21, Umut Tezduyar Lindskog u...@tezduyar.com
 wrote:
 
  On Tue, Dec 16, 2014 at 4:59 PM, Tom Gundersen t...@jklm.no wrote:
   On Tue, Dec 16, 2014 at 4:54 PM, Umut Tezduyar Lindskog
   u...@tezduyar.com wrote:
   The other thought is, what is the preferred way of loading modules
   when they are needed.
  
   Rely on kernel autoloading. Not all modules support that yet, but
   most do. What do you have in mind?
 
  We have some modules that we don't need them to be loaded so early.
 We
  much prefer them to be loaded when they are needed. For example we
  don't need to load the SD driver module until the service that uses
 SD
  driver is starting. With this idea in mind I started some
  investigation. Then I realized that our CPU utilization is not that
  high during module loading and I blame it to the sequential loading
 of
  modules. I am thinking this can be improved on systemd-modules-load
  side.
 
 We can probably improve the module loading by making it use worker
 processes similar to how udev works.

We realized it with threads, which are much cheaper for this job.

 In principle this could cause
 problems with things making assumptions on the order of module loading,
 so that is something to keep in mind.

Mmm, I don't see any issues here, since the dependencies are normally properly
described on the kernel side (otherwise you have a problem in any case). In the
worst case you lose some potential to parallelize module loading, if your
algorithm for distributing the modules to workers does not work efficiently.

 That said, note that most modules
 will be loaded by udev which already does it in parallel...

... only if you still trigger "add" uevents through the complete device tree
during startup, which is really expensive and does not go well with the "load
things only when they are actually needed" philosophy ...

 
   Do they have to be loaded on ExecStartPre= or as a separate
 service
   which has ExecStart that uses kmod to load them?
   Wouldn't it be useful to have something like ExecStartModule=?
  
   I'd much rather we improve the autoloading support...
 
  My understanding is autoloading support is loading a module if the
  hardware is available.
 
 That, or for non-hardware modules when the functionally is first used
 (networking, filesystems, ...).
 
  What I am after is though loading the module when they are needed.
 
 This sounds really fragile to me (having to encode this dependency
 everywhere rather than just always assume the functionality is
 available).

That is actually the main challenge when this approach is applied. But the
assumption you are talking about is in many cases only a kind of facade, at
least if your applications
- are not waiting for udev to completely settle after the coldplug trigger, or
- are able to deal with devices in a hotplug fashion.

 
 Cheers,
 
 Tom


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948



Re: [systemd-devel] Improving module loading

2014-12-21 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Ivan Shapovalov [mailto:intelfx...@gmail.com]
 Sent: Sunday, December 21, 2014 3:26 PM
 To: systemd-devel@lists.freedesktop.org
 Cc: Hoyer, Marko (ADITG/SW2); Umut Tezduyar Lindskog
 Subject: Re: [systemd-devel] Improving module loading
 
 On Sunday, December 21, 2014 at 01:03:36 PM, Hoyer, Marko wrote:
   -Original Message-
   From: Umut Tezduyar Lindskog [mailto:u...@tezduyar.com]
   Sent: Saturday, December 20, 2014 6:45 PM
   To: Hoyer, Marko (ADITG/SW2)
   Cc: systemd-devel@lists.freedesktop.org
   Subject: Re: [systemd-devel] Improving module loading
  
   [...]
I had such a discussion earlier with some of the systemd guys. My
   intention was to introduce an additional unit for module loading
 for
   exactly the reason you mentioned. The following (reasonable)
 outcome
   was:
  
   Do you have links for the discussions, I cannot find them.
 
  Actually not, sorry. The discussion was not done via any mailing
 list.
 
   systemd already has a service that loads the modules.
 
  Sorry, there is a word missing in my sentence above. My idea was not
 to introduce a unit for modules loading but an own unit type, such
 as .kmodule. The idea was to define .kmodule units to load one or a set
 of kernel modules each at a certain point during startup by just
 integrating them into the startup dependency tree. This idea would
 require integrating kind of worker threads into systemd. The outcome
 was as summarized below.
 
 Why would you need a separate unit type for that?
 
 load-module@.service:
 
 [Unit]
 Description=Load kernel module %I
 DefaultDependencies=no
 
 [Service]
 Type=oneshot
 RemainAfterExit=yes
 ExecStart=/usr/bin/modprobe %I

To avoid forking a process for that ... We earlier had an issue with cgroups in
the kernel which caused between 20 and 60ms of delay per process executed by
systemd.

But actually we are now doing it exactly this way, only not with modprobe but
with another tool, which can load modules in parallel, takes care of
synchronization (devices and attributes), and does some other stuff as well ...

In some cases we don't even have an additional unit for that. We just put the
kmod call into an ExecStartPre= statement in the service file that requires the
module(s) to be loaded beforehand.

 
 ...then add dependencies like Requires=load-module@foo.service and
 After=load-module@foo.service.
 
 --
 Ivan Shapovalov / intelfx /


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


Re: [systemd-devel] Improving module loading

2014-12-23 Thread Hoyer, Marko (ADITG/SW2)
Hi Greg,

thx a lot for the feedback and hints. You asked for lots of numbers; I tried to
add those I have available at the moment. Find them inline. I'm additionally
interested in more details on some of the ideas you outlined. It would be nice
if you could go into more detail at certain points. I added some questions
inline as well.

 -Original Message-
 From: Greg KH [mailto:gre...@linuxfoundation.org]
 Sent: Sunday, December 21, 2014 6:47 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: Umut Tezduyar Lindskog; systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Improving module loading
 
 On Sun, Dec 21, 2014 at 12:31:30PM +, Hoyer, Marko (ADITG/SW2)
 wrote:
   If you have control over your kernel, why not just build the
 modules
   into the kernel, then all of this isn't an issue at all and there
 is
   no overhead of module loading?
 
  It is a questions of kernel image size and startup performance.
  - We are somehow limited in terms of size from where we are loading
 the kernel.
 
 What do you mean by this?  What is limiting this?  What is your limit?
 How large are these kernel modules that you are having a hard time to
 build into your kernel image?
- As far as I remember, we have special fastboot-aware partitions on the eMMC
that are available very fast, but those are very limited in size. On this point
I'm not completely sure; it is something I was told.

- Targeted kernel size: 2-3MB compressed

- Kernel modules:
- we have heavy graphics drivers (~800kB, stripped); they are needed
halfway through startup
- video processing unit drivers (I don't know the size); they are needed
halfway through startup
- wireless & bluetooth; they are needed very late
- USB subsystem; conventionally needed very late (but this finally
depends on the concrete product)
- hot-plug mass storage handling; conventionally needed very late (but
this finally depends on the concrete product)
- audio driver; in most of our products needed very late
- some drivers for INC communication (those needed very early we compiled
in; those needed later we keep as modules)

All in all, I'd guess we would get twice the size if we compiled in all the
stuff.

 
  - Loading the image is a kind of monolithic block in terms of time
  where you can hardly do things in parallel
 
 How long does loading a tiny kernel image actually take?

I don't know exact numbers, sorry. I guess something between 50-200ms plus the
time for unpacking. But this loading and unpacking job is important, since it
sits directly on the critical path.

 
  - We are strongly following the idea from Umut (loading things not
  before they are actually needed) to get up early services very early
  (e.g. rendering a camera on a display in less than 2secs after power
  on)
 
 Ah, IVI, you all have some really strange hardware configurations :(

Yes, IVI. Since we develop our hardware as well as our software (in a different
department), I'm interested in more information about what is strange about IVI
hardware configurations in general. Maybe we can improve things to a certain
extent. Could you go into more detail?

 
 There is no reason you have to do a cold reset to get your boot times
 down, there is the fun resume from a system image solution that
 others have done that can get that camera up and displayed in
 milliseconds.
 

I'm interested in this point.
- Are you talking about suspend-to-RAM, suspend-to-disk, or a hybrid
combination of both?
- Or do you have something completely different in mind?

I thought about such a solution myself. I'm not fully convinced yet, since we
have really hard timing requirements (partly motivated by law). So I see two
principal ways for a resume solution:
- either the resume solution is robust enough to guarantee that it comes up
properly on every boot
- achieved for instance by a static system image that brings the system
into a static state very fast, from which a kind of conventional boot then
continues ...
- or the boot after an actual cold reset is fast enough to guarantee at least
the really important timing requirements in case the resume does not come up
properly

  - Some modules do time / CPU consuming things in init(), which would
  delay the entry time into userspace
 
 Then fix them, that's the best thing about Linux, you have the source
 to not accept problems like this!  And no module should do expensive
 things in init(), we have been fixing issues like that for a while now.
 

This would probably be the cleanest solution. In the long term we are of course
going this way, and we are trying to get our suppliers to go this way with us
as well. But in the end we have to bring products out at a fixed date. So it is
sometimes easier, and more stable, to work around suboptimal things.

For instance:
- refactoring a driver that is doing lots of CPU

Re: [systemd-devel] Improving module loading

2014-12-23 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Lucas De Marchi [mailto:lucas.de.mar...@gmail.com]
 Sent: Monday, December 22, 2014 7:00 PM
 To: Lennart Poettering
 Cc: Hoyer, Marko (ADITG/SW2); systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Improving module loading
 
 On Mon, Dec 22, 2014 at 1:04 PM, Lennart Poettering
 lenn...@poettering.net wrote:
  On Sat, 20.12.14 10:45, Hoyer, Marko (ADITG/SW2) (mho...@de.adit-
 jv.com) wrote:
 
  I had such a discussion earlier with some of the systemd guys. My
  intention was to introduce an additional unit for module loading for
  exactly the reason you mentioned. The following (reasonable) outcome
  was:
 
  - It is dangerous to load kernel modules from PID 1 since module
loading can get stuck
 
  - Since modules are actually loaded with the thread that calls the
syscall, systemd would need additional threads
 
  - Multi Threading is not really aimed in systemd for stability
  reasons
 
  The probably safest way to do what you intended is to use an
  additional process to load your modules, which could be easily done
  by using ExecStartPre= in a service file. We are doing it exactly
  this way not with kmod but with a tool that loads modules in
  parallel.
 
  I'd be willing to merge a good patch that beefs up
  systemd-modules-load to load the specified modules in parallel, with
  one thread for each.
 
  We already have a very limited number of threaded bits in systemd,
 and
  I figure out would be OK to do that for this too.
 
  Please keep the threading minimal though, i.e. one kmod context per
  thread, so that we need no synchronization and no locking. One thread
  per module, i.e. no worker thread logic with thread reusing. also,
  please set a thred name, so that hanging module loading only hang one
  specific thread and the backtrace shows which module is at fault.
 
 I'm skeptical you would get any speed up for that. I think it would be
 better to have some numbers shared before merging such a thing.
 

As I already outlined in my answer to Greg, parallel loading was not our main
motivation for inventing something new. We found that for some of our modules
parallel loading gained us a benefit, so we integrated this feature. Since we
are not using udevd during startup at all, most of our modules are loaded
manually. I have no idea how things are distributed between
systemd-modules-load and udevd in conventional Linux desktop or server systems.
If only a handful of modules are actually loaded using systemd-modules-load, it
is probably not worth optimizing at this end.

Does someone have concrete numbers on how many modules are loaded by hand using
systemd-modules-load in a conventional system?

 If you have 1 context per module/thread you will need to initialize
 each context which is really the most expensive part in userspace,
 particularly if finit_module() is being used (which you should unless
 you have restrictions on the physical size taken by the modules). Bare
 in mind the udev logic has only 1 context, so the initialization is
 amortized among the multiple module load calls.
 

This does not really match my experience. Once the kmod binary cache is in the
VFS page cache, getting a new context is really fast, even in new processes.
The expensive thing about udev is that it very quickly starts forking off
worker processes, so at least one new context per process is created anyway.
Additionally, the people who decide to use systemd-modules-load to load
specific modules have good reasons for that. A prominent one is probably that
udevd does not work for the respective module because no concrete device is
coupled with it. I think we do not have that many kernel modules which need to
be handled like this, which brings us again to the question of whether it is
really worth pimping systemd-modules-load.

 For the don't load until it's needed I very much prefer the static
 nodes approach we have. Shouldn't this be used instead of filling
 modules-load-d with lots of entries?

We are not using systemd-modules-load for this approach, since it tries to load
all modules in one shot. We execute our tool several times during startup to
bring up hardware piece by piece, exactly at the point where it is needed. The
tool is either executed like modprobe or with a configuration file containing a
set of modules to be loaded in one shot, plus some other settings needed for
synchronization and setup.

 
 I really miss numbers here and more information on which modules are
 taking long because they are serialized.
 
 --
 Lucas De Marchi


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


Re: [systemd-devel] Support for staged startup

2015-02-02 Thread Hoyer, Marko (ADITG/SW2)
Hello,

thx for the answer. 


 If you do not use --no-block to start your second target, first
 target will never finish.

That's something I cannot confirm. If you define the service that calls
"systemctl start xxx" as oneshot, the service will be in state "activating" for
exactly the time needed to bring up the target.


 Other caveat of your way is that systemd doesn't know about your final
 target until it receives systemctl start destination.target. Since it
 doesn't know about the target, units that are requested by
 destination.target will not have the default dependencies applied.

That's true but wanted in my case ;)

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


Re: [systemd-devel] Support for staged startup

2015-02-02 Thread Hoyer, Marko (ADITG/SW2)
Hello,

thx for the answer.

 Why not start the final sub-tree units the conventional way, but make
 them all wait, listening on sockets?A final service need not
 contain a 'systemctl start xxx.target' command, as instead it could
 simply write a message to those sockets.  Some services could receive a
 signal telling them to terminate, and others telling them to continue.
 

OK, good idea. But effort-wise this is finally the same as calling systemctl
(kicking off one process which communicates via IPC with another one). Whether
it is my own socket or the systemd control socket does not really matter.
 
 Given that it's possible to specify the startup service in the kernel
 command line with system.unit=,  the engineer configuring the startup
 sequence could specify a variety of alternate dependency
 trees.Each tree would have a different unit at its head.The
 units in one tree need not appear in another at all, or they could
 appear in the second tree in a different order.
 

That's possible as well, but it might get a bit too complex once a bit more
dynamism is needed. This finally depends on the actual use case.

OK, I guess what I learned from your answers is that the systemctl approach is
probably the best one for my concrete use case. Thanks, everyone, for sharing
your thoughts.


Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948


[systemd-devel] Support for staged startup

2015-01-29 Thread Hoyer, Marko (ADITG/SW2)
Hi all,

I'd like to realize a staged startup with systemd, which is mainly about:
- starting up a static tree up to a final service
- the only job of the final service is to kick off the start of an additional
sub-tree of units

This kind of startup could be realized simply by adding an additional oneshot
service which executes: systemctl start xxx.target

My question now is:
Is this the appropriate way of realizing this, or is there a more elegant way?
My problem with this approach is that systemd executes a binary (systemctl)
which communicates with systemd just to queue a new job in systemd. It sounds
as if there should be a more direct way ...
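For illustration, the stage-kicker described above might look like this (all unit and target names are invented):

```ini
# stage2-kick.service -- illustrative sketch; names are invented
[Unit]
Description=Kick off startup stage 2
# Placed at the end of the static early tree:
After=early.target

[Service]
Type=oneshot
RemainAfterExit=yes
# With --no-block the job is only queued; without it, this unit stays
# in "activating" until stage2.target is fully up.
ExecStart=/bin/systemctl --no-block start stage2.target
```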

Thx for any feedback.

Best regards

Marko Hoyer



Re: [systemd-devel] Service watchdog feature in state ACTIVATING ?

2015-03-02 Thread Hoyer, Marko (ADITG/SW2)
Hi Umut,

thx for answering

 -Original Message-
 From: Umut Tezduyar Lindskog [mailto:u...@tezduyar.com]
 Sent: Monday, March 02, 2015 8:51 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Service watchdog feature in state
 ACTIVATING ?
 
 Hi Marko,
 
 On Sunday, March 1, 2015, Hoyer, Marko (ADITG/SW2) mho...@de.adit-
 jv.com wrote:
 
 
   Hi,
 
   I ran into a use case where the activation phase of a service
 takes significantly longer than the desired watchdog period
 (Activating: 10-20secs, Watchdog: 1-5secs).
 
   I found out that the watchdog features starts not before the
 service is in state START_POST. This means for my use case that the
 system is blind for 10-20secs (whatever I set as startup timeout here).
 
   Is there any way to activate the watchdog feature already in
 phase ACTIVATING?
 
 
 Why would you need this? Watchdog is to prevent system being stuck
 somewhere. If activation fails within TimeoutStartSec=, systemd will
 put the service in failed to activate state anyways.
 
 Is waiting 20 seconds to detect the freeze is too much for your case?
 Is it not possible to lower the activation time?

Yes, it is too long. The process is a kind of application starter and observer 
that handles a startup phase. It is responsible for:
- observing the internal states of upcoming applications
- kicking off the startup of applications depending on the internal state of 
other ones
- delaying the startup of late services (started the normal way by systemd) 
until the startup phase is done

And yes, one could say that we are implementing a kind of second systemd 
started by systemd. The difference is that our starter knows about more 
states than just ACTIVATING and RUNNING, which is not really realizable with 
systemd, especially when more than one application is contained in one process.

So the idea was to bring up a set of applications with our starter application 
and remain with our starter in state ACTIVATING until it is done bringing up 
the set of applications.

Depending on the product, bringing up our set of applications can take 
10-20 secs. Since the starter is itself a sensitive application, it needs to be 
supervised by a watchdog with a timeout far below 20 secs.

Hope this gives a rough picture of our use case.

 
 Umut
 
 
   I guess there should be a second watchdog configuration parameter
 to allow services to use different values for the states ACTIVATING and
 RUNING. Otherwise, people who are not interested in having a watchdog
 observation during startup will run into troubles ...
 
   Any opinions on that?
 
 
   Best regards
 
   Marko Hoyer
 
   Advanced Driver Information Technology GmbH
   Software Group II (ADITG/SW2)
   Robert-Bosch-Str. 200
   31139 Hildesheim
   Germany
 
   Tel. +49 5121 49 6948
   Fax +49 5121 49 6999
   mho...@de.adit-jv.com javascript:;
 
   ADIT is a joint venture company of Robert Bosch GmbH/Robert Bosch
 Car Multimedia GmbH and DENSO Corporation
   Sitz: Hildesheim, Registergericht: Amtsgericht Hildesheim HRB
 3438
   Geschaeftsfuehrung: Wilhelm Grabow, Katsuyoshi Maeda
   ___
   systemd-devel mailing list
   systemd-devel@lists.freedesktop.org javascript:;
   http://lists.freedesktop.org/mailman/listinfo/systemd-devel
 
Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Service watchdog feature in state ACTIVATING ?

2015-03-01 Thread Hoyer, Marko (ADITG/SW2)
Hi,

I ran into a use case where the activation phase of a service takes 
significantly longer than the desired watchdog period (Activating: 10-20secs, 
Watchdog: 1-5secs).

I found out that the watchdog feature does not start before the service is in 
state START_POST. For my use case this means that the system is blind for 
10-20 secs (whatever I set as startup timeout here).

Is there any way to activate the watchdog feature already in phase ACTIVATING?
I guess there should be a second watchdog configuration parameter to allow 
services to use different values for the states ACTIVATING and RUNNING. 
Otherwise, people who are not interested in a watchdog observation 
during startup will run into trouble ...

Any opinions on that?
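One possible opt-in shape, sketched with a hypothetical `ActivatingWatchdogSec=` directive (this option does not exist in systemd; it only illustrates the proposal):

```ini
[Service]
Type=notify
TimeoutStartSec=30s
# Existing behavior: watchdog is armed only once the service is past
# START_POST (i.e. running).
WatchdogSec=5s
# Hypothetical, NOT implemented: arm a watchdog already during the
# ACTIVATING phase, with its own (shorter) timeout; 0 = disabled.
ActivatingWatchdogSec=5s
```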

Best regards

Marko Hoyer

Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany

Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com

ADIT is a joint venture company of Robert Bosch GmbH/Robert Bosch Car 
Multimedia GmbH and DENSO Corporation
Sitz: Hildesheim, Registergericht: Amtsgericht Hildesheim HRB 3438
Geschaeftsfuehrung: Wilhelm Grabow, Katsuyoshi Maeda
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Support for staged startup

2015-01-29 Thread Hoyer, Marko (ADITG/SW2)
Hi Alison,

 -Original Message-
 From: Alison Chaiken [mailto:ali...@she-devel.com]
 Sent: Thursday, January 29, 2015 8:17 PM
 To: systemd-devel@lists.freedesktop.org
 Cc: Hoyer, Marko (ADITG/SW2)
 Subject: Re: Support for staged startup
 
 Marko Hoyer asks:
  I'd like to realize a staged startup with systemd which is mainly
 about:
  - starting up a static tree up to a final service
  - the only job of the final service is to kick off the start of an
  additional sub tree of units This kind of startup could be realized
  simply by adding an additional one shot service which executes:
  systemctl start xxx.target
 
 Marko, one target can already be specified as After another.   If
 B.target is present in one of the appropriate directories and specifies
 
 After=A.target
 
 and all the services of the final sub-tree are symlinked in a
 B.target.wants directory, doesn't the behavior you need result?   What
 is  missing?Of course, some of the units linked in B.target.wants
 may already be started by the time A.target completes if they are part
 of a earlier target or if they are needed by an earlier unit.   To
 suppress that behavior, you'd have to edit the individual units.
 
 I don't know of any use case for one unit to start another directly.
 Is there one?

1.) Coming up with a small tree first reduces the loading time of the unit set 
(not so important in my case)

2.) If you want to create some dynamics between target A and target B, so that 
depending on the startup situation services are started already before A, or, in 
another case, delayed until A is done, you probably need to disconnect 
them from the static startup tree and pull them in dynamically at the desired 
time.

 
 -- Alison
 
 --
 Alison Chaiken   ali...@she-devel.com
 650-279-5600
 http://{she-devel.com,exerciseforthereader.org}
 Never underestimate the cleverness of advertisers, or mischief makers,
 or criminals.  -- Don Norman

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Service watchdog feature in state ACTIVATING ?

2015-04-22 Thread Hoyer, Marko (ADITG/SW2)
 -Original Message-
 From: Lennart Poettering [mailto:lenn...@poettering.net]
 Sent: Wednesday, April 22, 2015 6:00 PM
 To: Hoyer, Marko (ADITG/SW2)
 Cc: Umut Tezduyar Lindskog; systemd-devel@lists.freedesktop.org
 Subject: Re: [systemd-devel] Service watchdog feature in state
 ACTIVATING ?
 
 On Mon, 02.03.15 20:32, Hoyer, Marko (ADITG/SW2) (mho...@de.adit-
 jv.com) wrote:
 
   Why would you need this? Watchdog is to prevent system being stuck
   somewhere. If activation fails within TimeoutStartSec=, systemd
 will
   put the service in failed to activate state anyways.
  
   Is waiting 20 seconds to detect the freeze is too much for your
 case?
   Is it not possible to lower the activation time?
 
  Yes it is too long. The process is a kind of application starter and
 observer processing a startup phase. It is responsible for:
  - observing the application internal states of upcoming applications
  - it kicks off the startup of applications depending on the internal
  state of other once
  - it delays the startup of late services started the normal way by
  systemd once the startup phase is done
 
  And yes, one could say that we are implementing a kind of second
  systemd started by systemd. The difference is that our starter knows
  about some more states than just ACTIVATING and RUNING which is not
  really realizable with systemd especially when more than one
  application is contained in one process.
 
  So the idea was to bring up a set of applications with our starter
  application and stay with our starter in state ACTIVATING until it is
  done with bringing up the set of applications.
 
  Depending on the product, bringing up our set of applications can
 take
  10-20secs. Since the starter is a kind of sensible application, it
  needs to be supervised by a watchdog with a timeout far less than
  20secs.
 
  Hope this gives a rough picture of our use case.
 
 So, I can see that having watchdog support during the activating phase
 might make sense in this case, but I am not sure this case is strong
 enough to add it to systemd proper, since it would complicate things
 for most: it's not a behaviour we can just enable for everybody, it
 would have to be something we'd have to add an explicit opt-in option
 for, since most daemons don't work like this.

Thx for getting back to this again.

A somewhat more lightweight variant could be adding a second watchdog timeout 
parameter meant for the activation phase only. This timeout value could be 0 
(infinite) by default. That way, the feature would be practically invisible for 
everyone who does not explicitly set it.

 Anyway, I'd prefer not adding support for this now. We can revisit this
 if more folks are asking for this, and this turns out to be a more
 common setup.

Sounds good.

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Reduce unit-loading time

2015-05-13 Thread Hoyer, Marko (ADITG/SW2)
Hi,

 -Original Message-
 From: systemd-devel [mailto:systemd-devel-
 boun...@lists.freedesktop.org] On Behalf Of cee1
 Sent: Wednesday, May 13, 2015 11:52 AM
 To: systemd Mailing List
 Subject: [systemd-devel] Reduce unit-loading time
 
 Hi all,
 
 We're trying systemd to boot up an ARM board, and find systemd uses
 more than one second to load units.

This sounds a fair bit too long to me. Our systemd comes up on an ARM-based 
system in about 200-300 ms from executing init up to the first unit being 
executed.

 
 Comparing with the init of Android on the same board, it manages to
 boot the system very fast.
 
 We guess following factors are involved:
 1. systemd has a much bigger footprint than the init of Android: the
 latter is static linked, and is about 1xxKB (systemd is about 1.4MB,
 and is linked with libc/libcap/libpthread, etc)
 
 2. systemd spends quiet a while to read/parse unit files.

This depends on the number of units involved in the startup (finally connected 
into the dependency tree that ends in the default.target root). In our case, 
loading a comparably large unit set takes about 40-60 ms; not so long, I'd say.

 
 
 Any idea to reduce the unit-loading time?
 e.g. one-single file contains all units descriptions, or a compiled
 version(containing resolved dependencies, or even the boot up
 sequence)

Could you provide me some additional information about your system setup?

- Version of systemd
- Are you starting something in parallel to systemd which might cause 
significant IO?
- Version of the kernel
- The kernel ticker frequency
- The enabled cgroups controllers

The last three points might sound a bit far away, but I've an idea in mind ...

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Reason for setting runqueue to IDLE priority and side effects if this is changed?

2015-07-15 Thread Hoyer, Marko (ADITG/SW2)
Hi all,

jumping from systemd 206 to systemd 211 we were faced with some issues, which 
are ultimately caused by a changed main loop priority of the job execution. 

Our use case is the following one:
--
While we are starting up the system, a so-called application starter brings 
up a set of applications at a certain point in a controlled way by 
requesting systemd via D-Bus to start the respective units. The reason is that we 
have to take care of application-internal states as synchronization points 
and to better meet the timing requirements of certain applications in a generic 
way. I told the story once before in another post about watchdog observation in 
state activating. However ...

Up to v206, the behavior of systemd was the following one:
--
- the starter sends out a start request for a batch of applications (it requests 
a sequence of unit starts)
- systemd seems to work off the sequence exactly as it was requested, 
one by one, in the same order

Systemd v211 shows a different behavior:

- the starter sends out a batch of requests
- the execution of the first job is significantly delayed (we have a system 
under stress, high CPU load at that time)
- suddenly, systemd starts working off the jobs, but now in reverse order
- depending on the situation, it might happen that a complete batch of 
scheduled jobs is reverse-ordered; sometimes two or more sub-batches of jobs 
are executed in the reverse order (the jobs in each batch being reverse-ordered)

I found that the system behavior with systemd v206 was only accidentally the 
expected one. The reason is that in this version the run queue dispatching was 
a fixed part of the main loop, located before the dispatching of the events. 
This way, it gained higher priority than the D-Bus request handling. One job 
was requested via D-Bus. Once the D-Bus job request was dispatched, it was 
worked off immediately in the next round of the main loop. Then the next D-Bus 
request was dispatched, and so on ...

Systemd v211 added the run queue as a deferred event source to the event 
handling, with priority IDLE. So D-Bus requests are preferred over job execution. 
The reverse-order effect is simply because the run queue is more a stack than a 
queue. All of the observed behavior can be explained this way, I guess.

So, long story in advance. I now have three questions:
- Am I causing any critical side effects if I increase the run queue 
priority so that it is higher than that of the D-Bus handling (which is 
NORMAL)? First tests showed that I can get back exactly the behavior we had 
before with that.

- Might situations still occur in which the jobs are reordered even though 
I have increased the priority?

- Is there any other good solution for ensuring the order of job execution?

Hope someone has some useful feedback.

Thx in advance.

Marko Hoyer

Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany

Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com

ADIT is a joint venture company of Robert Bosch GmbH/Robert Bosch Car 
Multimedia GmbH and DENSO Corporation
Sitz: Hildesheim, Registergericht: Amtsgericht Hildesheim HRB 3438
Geschaeftsfuehrung: Wilhelm Grabow, Katsuyoshi Maeda
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Reason for setting runqueue to IDLE priority and side effects if this is changed?

2015-07-17 Thread Hoyer, Marko (ADITG/SW2)
 
  Up to v206, the behavior of systemd was the following one:
  --
  - the starter sends out a start request of a bench of applications
 (he requests a sequence of unit starts)
 
 If you want to control order of execution yourself, why do you not wait
 for each unit to start before submitting next request?

The synchronization point is not 'Unit started' but 'Job started'. I'm 
searching for a good solution to detect that.

  So long story in advance. I've now two questions:
  - Am I causing any critical side effects when I'm increasing the run
 queue priority so that it is higher than the one of the dbus handling
 (which is NORMAL)? First tests showed that I can get back exactly the
 behavior we had before with that.
 
  - Might it still happen that situations are happening where the jobs
 are reordered even though I'm increasing the priority?
 
  - Is there any other good solution ensuring the order of job
 execution?
 
 
 systemd never promised anything about relative order of execution
 unless there is explicit dependency between units. So good solution is
 to put dependency in unit definitions. submit the whole bunch at once
 and let systemd sort the order.

Understood and agreed ;) A dependency is not an option because the
synchronization point is not 'Unit Started' but 'Job kicked off' for
the sequence. The sync point 'Unit Started' is needed for real dependencies.

However, has anyone an opinion on the first idea? (increasing the priority)
I'd be interested in side effects caused by this modification.

Thx,

Marko
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] x bits set on /run/systemd/private, any particular reason?

2016-06-24 Thread Hoyer, Marko (ADITG/SW2)
Hi,

I'm not an expert on Linux access right management, but I'm wondering why 
systemd's private socket (/run/systemd/private) has the x bits set. Did that 
happen accidentally?

Can someone explain?

Best regards

Marko Hoyer
Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany
Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com
ADIT is a joint venture company of Robert Bosch GmbH/Robert Bosch Car 
Multimedia GmbH and DENSO Corporation
Sitz: Hildesheim, Registergericht: Amtsgericht Hildesheim HRB 3438
Geschäftsführung: Wilhelm Grabow, Ken Yaguchi
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Call for a small extension - Passing the startup timeout to the processes as environment variable

2016-04-13 Thread Hoyer, Marko (ADITG/SW2)
Hi,

I'm interested in a small extension to how systemd passes a set of environment 
variables to the processes it executes (mainly what happens in 
build_environment() in execute.c).

What we are planning to do:

- We are planning to have some functionality linked into applications started 
by systemd (in a shared library or so).

- The purpose of the shared library is to dump some information about the 
application's state and about the process in general in case

  - the watchdog is not served in time, or

  - the READY=1 sd_notify signal is not sent in time.

- The information must be dumped out before systemd kills the application.

On the realization:

- The involved sd_notify calls are not made directly by the application; the 
application uses an abstraction provided by the shared library, which 
delegates the respective calls.

  - So we are able to detect when an application serves the watchdog or sends 
the READY=1 signal.

- We now need a way to detect, at about 90% of the elapsed timeout, that 
the application is not reacting.

- For this, we need to know the watchdog timeout and the startup 
timeout set in the service file.

  - The first one is passed as an environment variable by systemd to the processes.

  - The second one is not.

The request:

- Would it be possible to add a few code lines to pass the startup 
timeout as an environment variable as well?

Would be cool.
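For reference, systemd already exports the watchdog period to the service as WATCHDOG_USEC (together with WATCHDOG_PID). A minimal shell sketch of the 90% arithmetic such a library would apply; the value is simulated here, and a corresponding startup-timeout variable is exactly what is being requested above (it does not exist):

```shell
# WATCHDOG_USEC is normally exported by systemd when WatchdogSec= is set;
# simulated here for the example.
WATCHDOG_USEC=3000000   # 3 s, in microseconds

# Warn when 90% of the period has elapsed without a keep-alive ping.
warn_at_usec=$(( WATCHDOG_USEC * 90 / 100 ))
echo "$warn_at_usec"    # 2700000
```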

Thx in advance!

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] x bits set on /run/systemd/private, any particular reason?

2016-06-27 Thread Hoyer, Marko (ADITG/SW2)
Hi,

Thx for the answer.

>> Either way, +x has no meaning on sockets (only +w matters).

I guess this was the fact I was actually interested in.

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948
From: Mantas Mikulėnas [mailto:graw...@gmail.com]
Sent: Freitag, 24. Juni 2016 18:31
To: Hoyer, Marko (ADITG/SW2)
Cc: systemd Mailing List
Subject: Re: [systemd-devel] x bits set on /run/systemd/private, any particular 
reason?

On Fri, Jun 24, 2016 at 2:24 PM, Hoyer, Marko (ADITG/SW2) 
<mho...@de.adit-jv.com<mailto:mho...@de.adit-jv.com>> wrote:
Hi,

I’m not an expert on Linux access right management but I’m wondering why 
systemd’s private socket (/run/systemd/private) has the x bits set. Did it 
happen accidently?

Immediately after bind(), the socket will have all permissions that weren't 
masked out by the current umask – there doesn't seem to be an equivalent to the 
mode parameter of open().

The default umask for init is 0; it seems that while systemd does set a more 
restrictive umask when necessary, it doesn't bother doing so when setting up 
the private socket, so it ends up having 0777 permissions by default...

Either way, +x has no meaning on sockets (only +w matters). Checking `find /run 
-type s -ls`, it seems services aren't very consistent about whether to keep or 
remove it for their own sockets...

--
Mantas Mikulėnas <graw...@gmail.com<mailto:graw...@gmail.com>>
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Any reason why /run and /dev/shm do not have MS_NOEXEC flags set?

2017-02-01 Thread Hoyer, Marko (ADITG/SW2)
Hello,

a tiny question:
- Is there any reason why the mount points /run and /dev/shm do not have 
MS_NOEXEC flags set?

We would like to remove execution capabilities, for security reasons, from all 
volatile areas that are writable by users.
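For /dev/shm, one conventional way is an fstab override with restrictive options (systemd generates mount units from fstab entries); /run is mounted by systemd itself very early, so it may need different handling. A sketch only; some software expects an executable /dev/shm, so this needs testing on the target system:

```
# /etc/fstab -- sketch: mount /dev/shm without exec permission
tmpfs  /dev/shm  tmpfs  rw,nosuid,nodev,noexec  0  0
```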

Best regards

Marko Hoyer
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Any reason why /run and /dev/shm do not have MS_NOEXEC flags set?

2017-02-01 Thread Hoyer, Marko (ADITG/SW2)
Hi,

thanks to all for your fast feedback. I'll kick off an internal discussion 
based on the facts you delivered to find out if our people actually want what 
they want ;)

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948
-Original Message-
From: systemd-devel [mailto:systemd-devel-boun...@lists.freedesktop.org] On 
Behalf Of Reindl Harald
Sent: Mittwoch, 1. Februar 2017 11:55
To: systemd-devel@lists.freedesktop.org
Subject: Re: [systemd-devel] Any reason why /run and /dev/shm do not have 
MS_NOEXEC flags set?



Am 01.02.2017 um 11:02 schrieb Hoyer, Marko (ADITG/SW2):
> a tiny question:
>
> - Is there any reason why the mount points /run and /dev/shm do not 
> have MS_NOEXEC flags set?
>
> We like to remove execution capabilities from all volatile areas that 
> are writeable to users for security reasons

it's all not that easy - see
https://bugzilla.redhat.com/show_bug.cgi?id=1398474 and
https://bugs.exim.org/show_bug.cgi?id=1749 and I am pretty sure other pieces 
would break in case of a noexec SHM (yes, I know that these bug reports are not 
about SHM, they are just an example)


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Single Start-job remains listed after startup in state waiting ...

2016-10-28 Thread Hoyer, Marko (ADITG/SW2)
Hello,

we are observing a weird behavior with systemd 211.

The issue:

- After the startup is finished (multi-user.target is reached), one 
single job (type: start, unit: service) remains in the job queue in state waiting.

  - There seems not to be any unmet dependency.

  - There are no units in state failed.

  - The job is listed when calling systemctl list-jobs.

- The issue is seen sporadically, not on every startup.

- The startup is a mixture of

  - units coming up as part of the dependency tree ending in 
multi-user.target, and

  - units started by a component using systemctl start.

    - The started units might have a tree of depending units as well.

    - The sub-trees of several such units are not necessarily disjoint.

What I'm doing currently:

- The issue is hard to reproduce.

- That's why I'm trying to find out in which cases it is possible at all 
that a job in state waiting can remain in the job list.

- With a hypothesis, I can instrument systemd and try to reproduce it 
again.

How you can help me:

- Maybe someone already knows about such an issue. Maybe it has already been 
solved. Respective hints would be great.

- Some of you probably have a better knowledge of how transactions 
and the job scheduling work in systemd. Maybe such a person could 
point me to the cases that could lead to such a result.

Thx in advance!

Best regards

Marko Hoyer
Advanced Driver Information Technology GmbH
Software Group II (ADITG/SW2)
Robert-Bosch-Str. 200
31139 Hildesheim
Germany
Tel. +49 5121 49 6948
Fax +49 5121 49 6999
mho...@de.adit-jv.com
ADIT is a joint venture company of Robert Bosch GmbH/Robert Bosch Car 
Multimedia GmbH and DENSO Corporation
Sitz: Hildesheim, Registergericht: Amtsgericht Hildesheim HRB 3438
Geschäftsführung: Wilhelm Grabow, Ken Yaguchi
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Single Start-job remains listed after startup in state waiting ...

2016-11-04 Thread Hoyer, Marko (ADITG/SW2)
Hi,

Thx for the reply.

> We had an issue with that in the past, please retry with a less ancient 
> version of systemd, I am pretty sure this was already fixed, but I can't tell 
> you really when precisely.

This is most probably not possible for the project in its current state. I'll try to 
find the fix you are mentioning. 

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)

Tel. +49 5121 49 6948
-Original Message-
From: Lennart Poettering [mailto:lenn...@poettering.net] 
Sent: Donnerstag, 3. November 2016 20:44
To: Hoyer, Marko (ADITG/SW2)
Cc: systemd Mailing List
Subject: Re: [systemd-devel] Single Start-job remains listed after startup in 
state waiting ...

On Fri, 28.10.16 14:55, Hoyer, Marko (ADITG/SW2) (mho...@de.adit-jv.com) wrote:

> Hello,
> 
> we are observing a weird  behavior with systemd 211.
> 
> The issue:
> 
> - After the startup is finished (multi-user.target is reached), one
>   single job (type: start, unit: service) remains in the job queue
>   in state waiting

We had an issue with that in the past, please retry with a less ancient version 
of systemd, I am pretty sure this was already fixed, but I can't tell you 
really when precisely.

Lennart

--
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel