Re: [systemd-devel] Help needed for optimizing my boot time

2015-07-16 Thread Harald Hoyer
On 11.06.2015 16:54, Francis Moreau wrote:
 On 06/11/2015 01:40 PM, Andrei Borzenkov wrote:
 On Thu, Jun 11, 2015 at 2:26 PM, Francis Moreau francis.m...@gmail.com 
 wrote:

$ systemd-analyze critical-chain

graphical.target @7.921s
  multi-user.target @7.921s
autofs.service @7.787s +132ms
  network-online.target @7.786s
network.target @7.786s
  NetworkManager.service @675ms +184ms
basic.target @674ms
  ...

 ...
 Is NetworkManager-wait-online.service enabled and active?


 It seems it's enabled but no longer active:

 $ systemctl status NetworkManager-wait-online.service
 ● NetworkManager-wait-online.service - Network Manager Wait Online
Loaded: loaded
 (/usr/lib/systemd/system/NetworkManager-wait-online.service; disabled;
 vendor preset: disabled)
Active: inactive (dead) since Thu 2015-06-11 11:54:37 CEST; 1h 4min ago
   Process: 583 ExecStart=/usr/bin/nm-online -s -q --timeout=30
 (code=exited, status=0/SUCCESS)
  Main PID: 583 (code=exited, status=0/SUCCESS)

 Jun 11 11:54:30 cyclone systemd[1]: Starting Network Manager Wait Online...
 Jun 11 11:54:37 cyclone systemd[1]: Started Network Manager Wait Online.

 This seems correct to me, doesn't it?


 Actually it says disabled, which makes me wonder why it ran. But this
 is the service that is likely responsible for the long time you observe.
 If disabling it does not help, you can try masking it (systemctl mask)
 for a test.
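
For reference, masking and later unmasking the unit would look roughly like
this; the unit name is the one discussed above:

$ systemctl mask NetworkManager-wait-online.service
$ systemctl unmask NetworkManager-wait-online.service    # to undo the test

Masking links the unit to /dev/null, so it cannot be started even as a
dependency of network-online.target, which is stronger than disabling it.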

 
 Masking this service helps:
 
 $ systemd-analyze
 Startup finished in 3.323s (firmware) + 6.795s (loader) + 8.342s
 (kernel) + 1.470s (userspace) = 19.932s
 
 $ systemd-analyze critical-chain
 The time after the unit is active or started is printed after the @
 character.
 The time the unit takes to start is printed after the + character.
 
 graphical.target @1.470s
   multi-user.target @1.470s
 autofs.service @1.024s +445ms
   network-online.target @1.023s
 network.target @1.021s
   NetworkManager.service @731ms +289ms
basic.target @731ms
 
 and the system seems to run fine (especially autofs and ntpd).
 
 But I think the time given by systemd-analyze (1.470s) is not correct.
 When booting I can see that userspace is doing an fsck on the root
 filesystem, which takes more than 2s, and the login screen takes at least
 5s to appear once the fsck has started.
 
 Is the time spent in the initrd included in userspace?

Well, it seems systemd is not running in the initrd, so that time is accounted
to the kernel, which seems plausible given the 8.342s spent there.
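
For reference, one quick way to check this is to list the initrd contents; a
sketch, assuming a dracut-built initramfs and the usual /boot naming (adjust
the path for your distribution):

$ lsinitrd /boot/initramfs-$(uname -r).img | grep systemd/systemd

If nothing shows up, systemd does not run inside the initrd, and
systemd-analyze has no separate initrd phase to report, so that time ends up
in the kernel figure as described above.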


[systemd-devel] Failed at step RUNTIME_DIRECTORY spawning /usr/bin/true: File exists

2015-07-16 Thread Reindl Harald

why does systemd *randomly* try to create the RuntimeDirectory
for ExecStartPost and so terminate a perfectly up and running service
because of the "File exists" error?


Fedora 21 as well as Fedora 22
https://bugzilla.redhat.com/show_bug.cgi?id=1226509#c3

Jul 15 16:19:43 rawhide systemd: Stopping Test Unit...
Jul 15 16:19:43 rawhide systemd: Stopped Test Unit.
Jul 15 16:19:43 rawhide systemd: Starting Test Unit...
Jul 15 16:19:43 rawhide systemd: Failed at step RUNTIME_DIRECTORY 
spawning /usr/bin/true: File exists
Jul 15 16:19:43 rawhide systemd: test2.service: control process exited, 
code=exited status=233

Jul 15 16:19:43 rawhide systemd: Failed to start Test Unit.
Jul 15 16:19:43 rawhide systemd: Unit test2.service entered failed state.
Jul 15 16:19:43 rawhide systemd: test2.service failed.
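
For context, a minimal unit of the shape being discussed would look roughly
like this - a hypothetical sketch, not the exact unit from the bug report;
the ExecStart command is illustrative:

# test2.service (hypothetical reconstruction)
[Unit]
Description=Test Unit

[Service]
RuntimeDirectory=test2
ExecStart=/usr/local/bin/test2-daemon
ExecStartPost=/usr/bin/true

The complaint is that on a quick stop/start cycle the ExecStartPost command
randomly fails at the RUNTIME_DIRECTORY setup step with "File exists"
(exit status 233), which then marks the whole service as failed.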





Re: [systemd-devel] [RFC 0/6] A network proxy management daemon, systemd-proxy-discoveryd

2015-07-16 Thread David Woodhouse
On Fri Apr 10 05:17:37 PDT 2015, Tomasz Bursztyka wrote:
 As was discussed at the systemd hackfest during the Linux Conference
 Europe, one daemon could centralize the management of all network proxy
 configurations. The idea is that instead of something the user configures
 per application (like in Firefox, for instance) or more broadly per desktop
 environment (like in GNOME), the user could do it once and for all through
 such a daemon, and applications would then query it to know whether or not a
 proxy has to be used and which one.

What overriding reason is there for doing this instead of just using
PacRunner? If it's just the JS engine, that's *already* a plugin for
PacRunner and it's easy enough to add new options. Hell *I* managed to
add v8 support at some point in the dim and distant past, and I don't
even admit to knowing any C++. Or JavaScript.

You seem to be reimplementing the part of the solution that was already
basically *working*, while there are other parts which desperately need
to be completed.

PacRunner works. There is a libproxy plugin which uses it¹ — and
alternatively, PacRunner even provides its own drop-in replacement for
libproxy, implementing the nice simple API without the complete horror
that libproxy turned into.

So we have the dæmon working, and we have a simple way for client
libraries and applications to use it.
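
For the curious, the "nice simple API" on the client side looks roughly like
this; a minimal sketch using libproxy's public C API, with a placeholder URL:

#include <stdio.h>
#include <stdlib.h>
#include <proxy.h>

int main(void) {
        pxProxyFactory *pf = px_proxy_factory_new();

        /* Returns a NULL-terminated list of proxies to try in order,
         * e.g. "direct://" or "http://proxy.example.com:8080". */
        char **proxies = px_proxy_factory_get_proxies(pf, "http://www.example.com/");

        for (int i = 0; proxies && proxies[i]; i++) {
                printf("%s\n", proxies[i]);
                free(proxies[i]);
        }
        free(proxies);
        px_proxy_factory_free(pf);
        return 0;
}

Behind that call, the PacRunner libproxy plugin (or the drop-in replacement)
can hand the actual PAC evaluation off to the PacRunner daemon over D-Bus.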

The *only* thing that has so far held me back from proposing a
packaging guideline in my Linux distribution of choice which says
applications SHALL use libproxy or query PacRunner by default has
been the fact that NetworkManager does not pass on the proxy
information to PacRunner, so applications which query it won't yet get
the *right* results².

If you're going to look at this stuff, I wish you'd fix *that* (for
both NetworkManager and systemd-networkd) instead of polishing
something that already existed.

Then again, as long as it continues to work, I don't really care too
much. In some way, having it be part of systemd would at least give
more credence to the idea that applications SHOULD be using it by
default.

-- 
David Woodhouse                            Open Source Technology Centre
david.woodho...@intel.com  Intel Corporation

¹ https://code.google.com/p/libproxy/issues/detail?id=152
² https://bugzilla.gnome.org/show_bug.cgi?id=701824 and
  https://mail.gnome.org/archives/networkmanager-list/2013-June/msg00093.html



Re: [systemd-devel] [RFC 6/6] update TODO

2015-07-16 Thread David Woodhouse
On Fri Apr 10 06:26:57 PDT 2015, Tom Gundersen wrote:
  On Fri, Apr 10, 2015 at 2:17 PM, Tomasz Bursztyka wrote:
  +   - support IPv6
 
 This we should probably have from the beginning, any reason you kept
 it IPv4 only for now (apart from keeping it simple as a POC)?

Wow, yeah. This far into the 21st century, you basically have to go out
of your *way* to misdesign things badly enough to make them work only
for Legacy IP and not IPv6. Do not simply note it in the TODO list.

You cannot retrofit sanity.

-- 
David Woodhouse                            Open Source Technology Centre
david.woodho...@intel.com  Intel Corporation




Re: [systemd-devel] [RFC 5/6] proxy-discoveryd: Add the basic parts for handling DBus methods

2015-07-16 Thread David Woodhouse
 
 +static int method_find_proxy(sd_bus *bus, sd_bus_message *message, void *userdata, sd_bus_error *error) {
 +        _cleanup_free_ char *p = strdup("DIRECT");
 +        Manager *m = userdata;
 +        int r;
 +
 +        assert(bus);
 +        assert(message);
 +        assert(m);
 +
 +        r = proxy_execute(m->default_proxies, message);
 +        if (r < 0)
 +                sd_bus_reply_method_return(message, "s", p);
 +
 +        return 1;
 +}

That seems to be making no attempt to use the *correct* proxy
configuration according to the request.

In the case of things like split-tunnel VPNs, we want to handle it
basically the same way that we handle DNS.

Requests within the VPN's DNS domains, and the IP ranges which are
routed to the VPN, need to be resolved according to the VPN's proxy
configuration. And everything else needs to be resolved according to
the local proxy configuration.

NetworkManager already sets up dnsmasq to do precisely this for DNS.
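
To make the intent concrete, the selection could be sketched roughly like this
- purely illustrative logic, not code from the patch; the types and names are
invented:

#include <string.h>
#include <strings.h>
#include <stddef.h>

/* Hypothetical per-link proxy configuration. */
typedef struct LinkProxyConfig {
        const char *dns_domain;   /* e.g. "corp.example.com" for the VPN link */
        const char *pac_url;      /* PAC/proxy settings learned on that link */
} LinkProxyConfig;

/* Pick the config of the link whose DNS domain matches the request's host;
 * fall back to the default (local) configuration otherwise. A complete
 * implementation would also match destination addresses against the IP
 * ranges routed to each link. */
static const LinkProxyConfig *select_proxy_config(const LinkProxyConfig *links, size_t n_links,
                                                  const LinkProxyConfig *local_default,
                                                  const char *host) {
        for (size_t i = 0; i < n_links; i++) {
                size_t hl = strlen(host), dl = strlen(links[i].dns_domain);

                if (hl >= dl && strcasecmp(host + hl - dl, links[i].dns_domain) == 0 &&
                    (hl == dl || host[hl - dl - 1] == '.'))
                        return &links[i];
        }
        return local_default;
}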

-- 
David Woodhouse                            Open Source Technology Centre
david.woodho...@intel.com  Intel Corporation




Re: [systemd-devel] [RFC 5/6] proxy-discoveryd: Add the basic parts for handling DBus methods

2015-07-16 Thread Tomasz Bursztyka

Hi David,


+static int method_find_proxy(sd_bus *bus, sd_bus_message *message, void *userdata, sd_bus_error *error) {
+        _cleanup_free_ char *p = strdup("DIRECT");
+        Manager *m = userdata;
+        int r;
+
+        assert(bus);
+        assert(message);
+        assert(m);
+
+        r = proxy_execute(m->default_proxies, message);
+        if (r < 0)
+                sd_bus_reply_method_return(message, "s", p);
+
+        return 1;
+}

That seems to be making no attempt to use the *correct* proxy
configuration according to the request.

In the case of things like split-tunnel VPNs, we want to handle it
basically the same way that we handle DNS.

Requests within the VPN's DNS domains, and the IP ranges which are
routed to the VPN, need to be resolved according to the VPN's proxy
configuration. And everything else needs to be resolved according to
the local proxy configuration.

NetworkManager already sets up dnsmasq to do precisely this for DNS.


This is all known. This thread was meant to be just an RFC, so it only covers
the very basics of the beginning of a proposal.

Tomasz


[systemd-devel] Should /var/lib/machines be a mounted subvolume rather than an actual subvolume?

2015-07-16 Thread Chris Murphy
Resurrection of the related thread "systemd and nested Btrfs subvolumes" from March:
http://lists.freedesktop.org/archives/systemd-devel/2015-March/029666.html

The question:
I understand the argument for subvolumes for containers below
/var/lib/machines. I don't understand what feature(s) of Btrfs
subvolumes will be leveraged for /var/lib/machines itself, and why it
isn't just a regular directory that then contains whatever subvolumes
are needed?


The problem:
openSUSE uses subvolumes at the top level of the file system
(in subvolid 5) to make certain things exempt from snapshotting and
rollback: logs, mail, the bootloader, and system settings. See the
fstab in the above URL for the full listing.

The fstab containing that long list of subvolumes to mount ensures
that those identical subvolumes are always used no matter which
subvolume/snapshot the user rolls back to.

But there isn't an fstab entry for /var/lib/machines. So a) it won't
get snapshotted by snapper, and b) if a rollback is done, the backing
subvolume containing all the nspawn container subvolumes won't be
mounted; the directory will simply be empty.

The solution:
The implied fix for this is to create the subvolume
FS_TREE/var/lib/machines at installation time, and add it to fstab
to always mount at /var/lib/machines (a directory found in all
snapshots of /).
https://features.opensuse.org/319287
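
A sketch of what such an fstab entry could look like, following the pattern of
the other openSUSE subvolume mounts; the UUID is a placeholder and the "@"
prefix assumes openSUSE's default subvolume naming:

UUID=<fs-uuid>  /var/lib/machines  btrfs  subvol=@/var/lib/machines  0 0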

As a consequence, it means if there's a rollback, it's not possible to
delete /var/lib/machines - its contents can be deleted but a mounted
subvolume can't be.

Hence the question, rephrased: does systemd expect /var/lib/machines
to be an actual subvolume rather than a mountpoint backed by a mounted
subvolume?


-- 
Chris Murphy


Re: [systemd-devel] [RFC 6/6] update TODO

2015-07-16 Thread Tomasz Bursztyka

Hi David,


On Fri, Apr 10, 2015 at 2:17 PM, Tomasz Bursztyka wrote:
+   - support IPv6

This we should probably have from the beginning, any reason you kept
it IPv4 only for now (apart from keeping it simple as a POC)?

Wow, yeah. This far into the 21st century, you basically have to go out
of your *way* to misdesign things badly enough to make them work only
for Legacy IP and not IPv6. Do not simply note it in the TODO list.

You cannot retrofit sanity.


The answer was in Tom's question: this was just a bare POC.
This thread is 3 months old btw - and dead - where have you been?

Tomasz


[systemd-devel] UML: Fix block device setup

2015-07-16 Thread Thomas Meyer
diff --git a/rules/60-persistent-storage.rules 
b/rules/60-persistent-storage.rules
index 5ab03fc..0b14bb4 100644
--- a/rules/60-persistent-storage.rules
+++ b/rules/60-persistent-storage.rules
@@ -6,7 +6,7 @@
 ACTION=="remove", GOTO="persistent_storage_end"
 
 SUBSYSTEM!="block", GOTO="persistent_storage_end"
-KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*", GOTO="persistent_storage_end"
+KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*", GOTO="persistent_storage_end"
 
 # ignore partitions that span the entire disk
 TEST=="whole_disk", GOTO="persistent_storage_end"
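
A quick way to verify the new match on a UML guest would be something along
the lines of

$ udevadm test /sys/block/ubda

assuming a /dev/ubda device exists on the test system, and then checking the
output for the symlinks the persistent-storage rules would create.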


Re: [systemd-devel] [RFC 0/6] A network proxy management daemon, systemd-proxy-discoveryd

2015-07-16 Thread Tomasz Bursztyka

Hi David,


On Fri Apr 10 05:17:37 PDT 2015, Tomasz Bursztyka wrote:

As was discussed at the systemd hackfest during the Linux Conference
Europe, one daemon could centralize the management of all network proxy
configurations. The idea is that instead of something the user configures
per application (like in Firefox, for instance) or more broadly per desktop
environment (like in GNOME), the user could do it once and for all through
such a daemon, and applications would then query it to know whether or not a
proxy has to be used and which one.

What overriding reason is there for doing this instead of just using
PacRunner? If it's just the JS engine, that's *already* a plugin for
PacRunner and it's easy enough to add new options. Hell *I* managed to
add v8 support at some point in the dim and distant past, and I don't
even admit to knowing any C++. Or JavaScript.

You seem to be reimplementing the part of the solution that was already
basically *working*, while there are other parts which desperately need
to be completed.

PacRunner works. There is a libproxy plugin which uses it¹ — and
alternatively, PacRunner even provides its own drop-in replacement for
libproxy, implementing the nice simple API without the complete horror
that libproxy turned into.

So we have the dæmon working, and we have a simple way for client
libraries and applications to use it.

The *only* thing that has so far held me back from proposing a
packaging guideline in my Linux distribution of choice which says
applications SHALL use libproxy or query PacRunner by default has
been the fact that NetworkManager does not pass on the proxy
information to PacRunner, so applications which query it won't yet get
the *right* results².

If you're going to look at this stuff, I wish you'd fix *that* (for
both NetworkManager and systemd-networkd) instead of polishing
something that already existed.

Then again, as long as it continues to work, I don't really care too
much. In some way, having it be part of systemd would at least give
more credence to the idea that applications SHOULD be using it by
default.



OK, maybe that was unclear in my mail, but the systemd guys want such a
feature in systemd. I know the work you did to integrate PACrunner properly,
and Marcel - who created PACrunner - was attending this hackfest as well.
So all parties involved know the reasons for this proposal.

Tomasz


Re: [systemd-devel] UML: Fix block device setup

2015-07-16 Thread Tomasz Torcz
On Thu, Jul 16, 2015 at 08:22:03PM +0200, Thomas Meyer wrote:
 diff --git a/rules/60-persistent-storage.rules 
 b/rules/60-persistent-storage.rules
 index 5ab03fc..0b14bb4 100644
 --- a/rules/60-persistent-storage.rules
 +++ b/rules/60-persistent-storage.rules
 @@ -6,7 +6,7 @@
  ACTION=="remove", GOTO="persistent_storage_end"
  
  SUBSYSTEM!="block", GOTO="persistent_storage_end"
 -KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*", GOTO="persistent_storage_end"
 +KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*", GOTO="persistent_storage_end"

  This list is getting longer and longer… surely there could be a better way?

-- 
Tomasz Torcz                   There exists no separation between gods and men:
xmpp: zdzich...@chrome.pl      one blends softly casual into the other.



Re: [systemd-devel] How to debug this strange issue about systemd?

2015-07-16 Thread Andrei Borzenkov
On Wed, 15 Jul 2015 23:03:02 +0800
sean x...@suse.com wrote:

 Hi All:
   I am trying to test the latest upstream kernel, but I have run into a
 strange issue with systemd.
 When the systemd extracted from the initrd image mounts the real root file
 system hda.img on /sysroot and changes root to the new directory, it cannot
 find /sbin/init and /bin/sh.
 In fact, these two files exist in hda.img.
 How can I debug this issue?
 Why doesn't it enter emergency mode?
 If it entered emergency mode, this issue would probably be easier to debug.
 

You can stop in dracut just before the switch-root step and examine the
environment. At this point the root should already be mounted.
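
For reference, with a dracut-built initrd (as used here) that typically means
adding a breakpoint to the kernel command line, e.g.:

rd.break=pre-pivot

which drops you into the initramfs shell after the root has been mounted on
/sysroot, so you can check /sysroot/sbin/init, /sysroot/bin/sh and whether
/sysroot/usr is populated before switch_root runs.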

 qemu command line:
 qemu-kvm -D /tmp/qemu-kvm-machine.log -m 1024M -append 
 root=UUID=20059b62-2542-4a85-80cf-41da6e0c1137 rootflags=rw rootfstype=ext4 
 debug debug_objects console=ttyS0,115200n8 console=tty0
  rd.debug rd.shell=1 log_buf_len=1M systemd.unit=emergency.target 
 systemd.log_level=debug systemd.log_target=console -kernel 
 ./qemu_platform/bzImage -hda ./qemu_platform/hda.img 
 -initrd ./qemu_platform/initrd-4.1.0-rc2-7-desktop+ -device 
 e1000,netdev=network0 -netdev user,id=network0 -serial 
 file:/home/sean/work/source/upstream/kernel.org/ttys0.txt
 
...
 
 sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform sudo mount 
 -o loop ./hda.img ./hda
 sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform ls -l 
 ./hda/sbin/init
 lrwxrwxrwx 1 sean users 26 Jul 14 22:49 ./hda/sbin/init -> ../usr/lib/systemd/systemd

Do you have a separate /usr?

 sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform ls -l 
 ./hda/bin/sh
 lrwxrwxrwx 1 sean users 4 Oct 26  2014 ./hda/bin/sh -> bash
 
 sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform lsinitrd 
 ./initrd-4.1.0-rc2-7-desktop+ |grep sbin\/init
 -rwxr-xr-x   1 root root 1223 Nov 27  2014 sbin/initqueue
 lrwxrwxrwx   1 root root   26 Jul 14 21:00 sbin/init -> ../usr/lib/systemd/systemd
 sean@linux-dunz:~/work/source/upstream/kernel.org/qemu_platform lsinitrd 
 ./initrd-4.1.0-rc2-7-desktop+ |grep bin\/sh
 lrwxrwxrwx   1 root root    4 Jul 14 21:00 bin/sh -> bash
 
 
 



Re: [systemd-devel] Reason for setting runqueue to IDLE priority and side effects if this is changed?

2015-07-16 Thread Andrei Borzenkov
On Wed, 15 Jul 2015 17:30:53 +
Hoyer, Marko (ADITG/SW2) mho...@de.adit-jv.com wrote:

 Hi all,
 
 Jumping from systemd 206 to systemd 211 we were faced with some issues, which
 turned out to be caused by a changed main-loop priority of the job execution.
 
 Our use case is the following one:
 --
 While we are starting up the system, a so-called application starter brings
 up a set of applications at a certain point in a controlled way by requesting
 systemd via D-Bus to start the respective units. The reason is that we have
 to take the applications' internal states into account as synchronization
 points and meet the timing requirements of certain applications in a generic
 way. I told the story once before in another post about watchdog observation
 in the activating state. However ...
 
 Up to v206, the behavior of systemd was the following one:
 --
 - the starter sends out start requests for a bunch of applications (it
 requests a sequence of unit starts)

If you want to control the order of execution yourself, why do you not wait
for each unit to start before submitting the next request?

 - systemd seems to work off the sequence exactly as requested, one by one,
 in the same order
 
 Systemd v211 shows a different behavior:
 
 - the starter sends out a bunch of requests
 - the execution of the first job is significantly delayed (we have a system
 under stress, with high CPU load at that time)
 - suddenly, systemd starts working off the jobs, but now in reverse order
 - depending on the situation, a complete bunch of scheduled jobs may be
 reverse-ordered; sometimes two or more sub-bunches of jobs are executed in
 reverse order (the jobs within each bunch are reverse-ordered)
 
 I found that the system behavior with systemd v206 was only accidentally the
 expected one. The reason was that in this version the run-queue dispatching
 was a fixed part of the main loop, located before the dispatching of the
 events. This way, it gained higher priority than the D-Bus request handling.
 One job was requested via D-Bus; once the D-Bus job request was dispatched,
 it was worked off immediately on the next round of the main loop. Then the
 next D-Bus request was dispatched, and so on ...
 
 Systemd v211 added the run queue as a deferred event source with priority
 IDLE. So D-Bus requests are preferred over job execution. The reverse-order
 effect is simply because the run queue is more of a stack than a queue. All
 of the observed behavior can be explained this way, I guess.
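
For context, the mechanism described above roughly corresponds to the
following sd-event pattern; an illustrative sketch, not systemd's actual
run-queue code:

#include <systemd/sd-event.h>

static int dispatch_run_queue(sd_event_source *s, void *userdata) {
        /* work off queued jobs here */
        return 0;
}

static int setup_run_queue_source(sd_event *e, void *manager, sd_event_source **ret) {
        int r;

        r = sd_event_add_defer(e, ret, dispatch_run_queue, manager);
        if (r < 0)
                return r;

        /* IDLE priority: this source is only dispatched when nothing else,
         * such as D-Bus I/O at the default priority, is pending - which is
         * why bus requests win over job execution. */
        return sd_event_source_set_priority(*ret, SD_EVENT_PRIORITY_IDLE);
}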
 
 So much for the long story. Now I have a few questions:
 - Am I causing any critical side effects by increasing the run-queue priority
 so that it is higher than that of the D-Bus handling (which is NORMAL)? First
 tests showed that this gives me back exactly the behavior we had before.
 
 - Might situations still arise where the jobs are reordered even though I
 have increased the priority?
 
 - Is there any other good solution for ensuring the order of job execution?
 

systemd never promised anything about the relative order of execution
unless there is an explicit dependency between units. So a good solution is
to put the dependencies in the unit definitions, submit the whole bunch at
once, and let systemd sort out the order.
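
For illustration, a minimal sketch of that approach with hypothetical unit
names: if app-b must only start once app-a is up, encode the ordering in
app-b.service rather than relying on the order of the D-Bus start requests:

# app-b.service (hypothetical)
[Unit]
Wants=app-a.service
After=app-a.service

[Service]
ExecStart=/usr/bin/app-b

Then a single request that pulls in both units (for example starting a target
that wants them) lets systemd serialize app-a before app-b regardless of how
its internal job queue is dispatched.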