Re: [systemd-devel] Moving systemd-bootchart to a standalone repository

2016-02-19 Thread Alexander Sverdlin
Hello Daniel,

On 17/02/16 17:51, Daniel Mack wrote:
> As part of our spring cleaning, we've been thinking about giving
> systemd-bootchart a new home, in a new repository of its own. I've been
> working on this and put the result here:
> 
>   https://github.com/systemd/systemd-bootchart
> 
> This repository contains a stripped down set of the utility functions we
> have in src/shared and src/basic in systemd, with most of those which
> bootchart doesn't use removed. The man page and service file etc. are
> all in the new repository. A new GitHub team was created for maintainers
> of that code, and I'm willing to add more people to it, just let me
> know. Auke, you are the official maintainer of the thing, so I'd put you
> in that group anyway.
> 
> I have a local patch that removes the current sources from systemd, but
> before I put that into a pull request, I'd appreciate some feedback, so
> please let me know if the standalone version of that tool works for you.

I've cross-built it for ARM v7 without problems, and it seems to work no
worse than before.

Regards,
Alex.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] md: Drop sending a change uevent when stopping

2016-02-19 Thread Shaohua Li
On Fri, Feb 19, 2016 at 10:06:20AM +1100, Neil Brown wrote:
> On Thu, Feb 18 2016, Hannes Reinecke wrote:
> 
> >
> > On 02/17/2016 10:29 PM, NeilBrown wrote:
> >> On Thu, Feb 18 2016, Shaohua Li wrote:
> >> 
> >>> On Wed, Feb 17, 2016 at 05:25:00PM +0100, Sebastian
> >>> Parschauer wrote:
>  When stopping an MD device, then its device node /dev/mdX
>  may still exist afterwards or it is recreated by udev. The
>  next open() call can lead to creation of an inoperable MD
>  device. The reason for this is that a change event
>  (KOBJ_CHANGE) is sent to udev which races against the
>  remove event (KOBJ_REMOVE) from md_free(). So drop sending
>  the change event.
>  
>  A change is likely also required in mdadm as many versions
>  send the change event to udev as well.
> >>> 
> >>> Makes sense, it's unlikely we need the CHANGE event.
> >>> Applied.
> >>> 
> >>> Thanks, Shaohua
> >> 
> >> It would be worth checking, but I think that with this change,
> >> you can write "inactive" to /sys/block/mdXXX/md/array_state and
> >> the array will become inactive, but no uevent will be
> >> generated, which isn't good. Maybe send the uevent that was
> >> just removed from the 'inactive' case of array_state_store()
> >> instead.
> >> 
> >> (But I still think this is just a bandaid and doesn't provide
> >> any guarantees that there will be no races with udev)
> >> 
> > Thing is, _none_ of the other subsystems will ever send a uevent
> > when it becomes inactive.
> 
> A CDROM drive does when you eject the media.
> 
> 
> > (Would be pretty pointless, too, as what exactly is one supposed
> > to do here?)
> 
> Lazy-unmount the filesystem?
> If the array was part of another array, mark the slot in that array as
> 'faulty' ?
> 
> > The current usage has it that CHANGE events are only ever sent if
> > a device becomes active.
> 
> "mostly" but not "only ever".

Neil,

did you mean I should drop the patch?

I really doubt there is any difference with or without the CHANGE event,
given that a REMOVE event will pop up soon afterwards. But userspace could
be relying on it; I'm not totally sure.

Thanks,
Shaohua


[systemd-devel] Bootchart speeding up boot time

2016-02-19 Thread Martin Townsend
Hi,

I'm new to systemd and have just enabled it for my Xilinx-based dual-core
Cortex-A9 platform.  The Linux system is built using Yocto (Fido branch),
which uses version 219 of systemd.

The main reason for moving over to systemd was to see if we could improve
boot times, and the good news was that just moving over to systemd halved
the boot time.  I then read that I could analyse the boot times in detail
using bootchart, so I set init=//bootchart on my kernel command line and
was really surprised to see my boot time halved again.  Thinking some weird
caching must have occurred on the first boot, I reverted back to a normal
systemd boot and the boot time jumped back to normal (around 17/18 seconds);
putting bootchart back in again reduced it to ~9/10 seconds.

So I created my own init using bootchart as a template that just slept for
20 seconds using nanosleep and this also had the same effect of speeding up
the boot time.

So the only difference I can see is that the kernel is not starting
/sbin/init -> /lib/systemd/systemd directly, but via another program that
forks and then, in the parent, calls execl to run /lib/systemd/systemd.
What I would really like to understand is why it runs faster when started
this way.
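For illustration, here is a rough, hypothetical sketch of the wrapper
pattern described above (bootchart itself does real sampling work in the
child; the 20-second sleep mirrors the test init mentioned earlier, and
the init path is a placeholder):

```c
/* wrapper-init.c — hypothetical sketch of a wrapper init: fork, let the
 * child do its work, and exec the real init from the parent so that
 * systemd still ends up running as PID 1. */
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork; the child sleeps (a stand-in for bootchart's sampling loop) and
 * exits, while the parent execs `init_path`. Returns only on error. */
static int wrap_init(const char *init_path, unsigned child_sleep_secs)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {
        /* child: this is where bootchart would collect its samples */
        sleep(child_sleep_secs);
        _exit(EXIT_SUCCESS);
    }

    /* parent: replace ourselves with the real init, as the mail describes */
    execl(init_path, init_path, (char *) NULL);
    return -1;  /* only reached if execl() fails */
}
```

A caller would invoke something like wrap_init("/lib/systemd/systemd", 20);
why starting systemd through such a wrapper speeds up boot is exactly the
open question of this thread.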

I'm using glibc v2.21
Linux kernel v3.14
gcc v4.9.2


Let me know if you require any more information.

Any help appreciated,
Martin.


[systemd-devel] Regression in ipv6 resolutions in systemd-resolved with AF_UNSPEC

2016-02-19 Thread Sébastien Luttringer
Hello,

Since systemd v229, I have one server which no longer resolves IPv6 addresses
when it uses nss-resolve and AF_UNSPEC.

This issue seems to be linked to the DNS resolver used on its network, which
is provided by a French ISP box (SFR).

I'm currently not able to pinpoint precisely where the issue is, but opening
the socket with AF_UNSPEC does not resolve IPv6, while AF_INET6 does.

I have the following nsswitch.conf
# grep hosts /etc/nsswitch.conf 
hosts: files resolve mymachines myhostname

With systemd v228 (precisely arch v228-4)
=========================================
# systemctl --version 
systemd 228
+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP
+GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN

# getent ahosts collectd.seblu.net
2001:bc8:3173:281:be5f:f4ff:fe84:d75e STREAM black.seblu.net
2001:bc8:3173:281:be5f:f4ff:fe84:d75e DGRAM  
2001:bc8:3173:281:be5f:f4ff:fe84:d75e RAW  

# ltrace getent ahosts collectd.seblu.net
...
getaddrinfo("collectd.seblu.net", nil, 0x7ffe7e8e7c70, 0x7ffe7e8e7c68)   = 0

# /usr/lib/systemd/systemd-resolve-host collectd.seblu.net
collectd.seblu.net: 2001:bc8:3173:281:be5f:f4ff:fe84:d75e
(black.seblu.net)

-- Information acquired via protocol DNS in 2.0ms.

With systemd v229 (precisely arch v229-2)
=========================================

# systemctl --version
systemd 229
+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP
+GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN

# getent ahosts collectd.seblu.net
# echo $?
2

# ltrace getent ahosts collectd.seblu.net 
...
getaddrinfo("collectd.seblu.net", nil, 0x7ffefda4d280, 0x7ffefda4d278)   = -2
+++ exited (status 2) +++

-2 is EAI_NONAME.
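For anyone wanting to reproduce this outside getent, the failing call boils
down to a plain getaddrinfo(3) with ai_family = AF_UNSPEC. A minimal sketch
(the hostname is whatever you want to test):

```c
/* lookup.c — minimal reproduction of the getent/ltrace calls above:
 * the only difference between the failing and working cases is the
 * address family passed in the hints. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>

/* Resolve `host` with the given family; returns 0 on success or the
 * getaddrinfo error code (e.g. EAI_NONAME, which is -2, as seen above). */
static int lookup(const char *host, int family)
{
    struct addrinfo hints, *res;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = family;   /* AF_UNSPEC fails here, AF_INET6 works */
    hints.ai_socktype = 0;      /* any socket type */

    int err = getaddrinfo(host, NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "%s: %s\n", host, gai_strerror(err));
        return err;
    }
    freeaddrinfo(res);
    return 0;
}
```

On the affected v229 setup, lookup("collectd.seblu.net", AF_UNSPEC) should
return EAI_NONAME while lookup("collectd.seblu.net", AF_INET6) succeeds,
matching the getent output in this report.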

# systemd-resolve collectd.seblu.net 
collectd.seblu.net: resolve call failed: 'black.seblu.net' does not have any RR
of the requested type

Here is a debug enabled transaction in systemd-resolved. 
https://horus.seblu.net/~seblu/systemd/resolved_bug_collectd.txt

Falling back to the nss-dns resolver allows resolution to work again
====================================================================

# systemctl stop systemd-resolved
# getent ahosts collectd.seblu.net
2001:bc8:3173:281:be5f:f4ff:fe84:d75e STREAM black.seblu.net
2001:bc8:3173:281:be5f:f4ff:fe84:d75e DGRAM  
2001:bc8:3173:281:be5f:f4ff:fe84:d75e RAW   

Forcing the socket family to AF_INET6 makes resolution work
===========================================================
# getent ahostsv6 collectd.seblu.net
2001:bc8:3173:281:be5f:f4ff:fe84:d75e STREAM black.seblu.net
2001:bc8:3173:281:be5f:f4ff:fe84:d75e DGRAM
2001:bc8:3173:281:be5f:f4ff:fe84:d75e RAW

# systemd-resolve -6 collectd.seblu.net
collectd.seblu.net: 2001:bc8:3173:281:be5f:f4ff:fe84:d75e
(black.seblu.net)

-- Information acquired via protocol DNS in 2.0ms.
-- Data is authenticated: no


# python -c 'import socket; print(socket.getaddrinfo("collectd.seblu.net", 
None, socket.AF_UNSPEC))'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known


# python -c 'import socket; print(socket.getaddrinfo("collectd.seblu.net", 
None, socket.AF_INET6))' 
[(<AddressFamily.AF_INET6: 10>, <SocketKind.SOCK_STREAM: 1>, 6, '',
('2001:bc8:3173:281:be5f:f4ff:fe84:d75e', 0, 0, 0)),
(<AddressFamily.AF_INET6: 10>, <SocketKind.SOCK_DGRAM: 2>, 17, '',
('2001:bc8:3173:281:be5f:f4ff:fe84:d75e', 0, 0, 0)),
(<AddressFamily.AF_INET6: 10>, <SocketKind.SOCK_RAW: 3>, 0, '',
('2001:bc8:3173:281:be5f:f4ff:fe84:d75e', 0, 0, 0))]


Cheers,

-- 
Sébastien "Seblu" Luttringer
https://seblu.net | Twitter: @seblu42
GPG: 0x2072D77A





Re: [systemd-devel] Support for large applications

2016-02-19 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Feb 19, 2016 at 03:13:43PM +0100, Tomasz Torcz wrote:
> On Fri, Feb 19, 2016 at 12:49:53PM +, Zbigniew Jędrzejewski-Szmek wrote:
> > On Fri, Feb 19, 2016 at 01:42:12PM +0100, Michal Sekletar wrote:
> > > On Wed, Feb 17, 2016 at 1:35 PM, Avi Kivity  wrote:
> > > 
> > > > 3. watchdog during startup
> > > >
> > > > Sometimes we need to perform expensive operations during startup (log
> > > > replay, rebuild from network replica) before we can start serving. Rather
> > > > than configure a huge start timeout, I'd prefer to have the service report
> > > > progress to systemd so that it knows that startup is still in progress.
> > > >
> > > 
> > > Did you have a look at sd_notify (man 3 sd_notify)? Basically, you can
> > > easily patch your service to report status to systemd and tell systemd
> > > exactly when it is ready to serve clients. That way you can avoid hacks
> > > like the huge start timeout you mentioned.
> > 
> > I don't think that helps, unless the service "lies" to systemd and
> > tells it has finished startup when it really hasn't (systemd would
> > ignore watchdog notifications during startup, and would do nothing if
> > they stopped coming, so the service has to tell systemd first that it
> > has started successfully, for the watchdog to be effective). Doing
> > that would fix this issue, but would mean that systemd wouldn't know
> > that the service is still starting and would for example start
> > subsequent jobs.
> > 
> > I don't think there's a way around the issue short of allowing
> > watchdog during startup. Databases which do long recovery are a bit
> > special, most programs don't exhibit this kind of behaviour, but maybe
> > this case is important enough to add support for it.
> 
>   Maybe systemd could ignore watchdog notifications during startup UNTIL the
> first WATCHDOG=1 notification comes? Then the normal watchdog logic would kick in.
>   This way we retain the current logic (ignore the watchdog during startup) unless
> the application WANTS to tickle the watchdog during startup.
>   Or is it too much magic?

I like it.

Zbyszek


Re: [systemd-devel] Support for large applications

2016-02-19 Thread Tomasz Torcz
On Fri, Feb 19, 2016 at 12:49:53PM +, Zbigniew Jędrzejewski-Szmek wrote:
> On Fri, Feb 19, 2016 at 01:42:12PM +0100, Michal Sekletar wrote:
> > On Wed, Feb 17, 2016 at 1:35 PM, Avi Kivity  wrote:
> > 
> > > 3. watchdog during startup
> > >
> > > Sometimes we need to perform expensive operations during startup (log
> > > replay, rebuild from network replica) before we can start serving. Rather
> > > than configure a huge start timeout, I'd prefer to have the service report
> > > progress to systemd so that it knows that startup is still in progress.
> > >
> > 
> > Did you have a look at sd_notify (man 3 sd_notify)? Basically, you can
> > easily patch your service to report status to systemd and tell systemd
> > exactly when it is ready to serve clients. That way you can avoid hacks
> > like the huge start timeout you mentioned.
> 
> I don't think that helps, unless the service "lies" to systemd and
> tells it has finished startup when it really hasn't (systemd would
> ignore watchdog notifications during startup, and would do nothing if
> they stopped coming, so the service has to tell systemd first that it
> has started successfully, for the watchdog to be effective). Doing
> that would fix this issue, but would mean that systemd wouldn't know
> that the service is still starting and would for example start
> subsequent jobs.
> 
> I don't think there's a way around the issue short of allowing
> watchdog during startup. Databases which do long recovery are a bit
> special, most programs don't exhibit this kind of behaviour, but maybe
> this case is important enough to add support for it.

  Maybe systemd could ignore watchdog notifications during startup UNTIL the
first WATCHDOG=1 notification comes? Then the normal watchdog logic would kick in.
  This way we retain the current logic (ignore the watchdog during startup) unless
the application WANTS to tickle the watchdog during startup.
  Or is it too much magic?
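From the service side, the notification protocol involved is small enough to
hand-roll. This is a minimal sketch following the documented NOTIFY_SOCKET
datagram protocol; a real service would simply link libsystemd and call
sd_notify(3), and the proposed opt-in semantics above are hypothetical:

```c
/* notify.c — minimal hand-rolled equivalent of sd_notify(3): send the
 * state string as a datagram to the AF_UNIX socket named by
 * $NOTIFY_SOCKET. */
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int notify(const char *state)
{
    const char *path = getenv("NOTIFY_SOCKET");
    if (!path || !*path)
        return 0;                       /* not running under systemd */

    int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
    if (sa.sun_path[0] == '@')
        sa.sun_path[0] = '\0';          /* abstract-namespace socket */

    ssize_t n = sendto(fd, state, strlen(state), 0,
                       (struct sockaddr *) &sa, sizeof(sa));
    close(fd);
    return n < 0 ? -1 : 0;
}
```

Under the semantics proposed above, a database doing long recovery would
call notify("STATUS=replaying log\nWATCHDOG=1") periodically during startup
to opt in to watchdog supervision, and notify("READY=1") once it can serve
clients.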

-- 
Tomasz Torcz  ,,If you try to upissue this patchset I shall be seeking
xmpp: zdzich...@chrome.pl   an IP-routable hand grenade.'' -- Andrew Morton (LKML)



Re: [systemd-devel] Support for large applications

2016-02-19 Thread Michal Sekletar
On Fri, Feb 19, 2016 at 1:49 PM, Zbigniew Jędrzejewski-Szmek
 wrote:

> I don't think there's a way around the issue short of allowing
> watchdog during startup. Databases which do long recovery are a bit
> special, most programs don't exhibit this kind of behaviour, but maybe
> this case is important enough to add support for it.

Rather than configuring a huge start timeout for the service, firing it
up, and hoping for the best, I was thinking about disabling the start
timeout entirely, making the service Type=notify, and signalling service
readiness via sd_notify. Also, update progress by sending a STATUS
message when some significant stage of the DB startup procedure finishes.
This does not solve the watchdog problem, but I think it would be an
improvement anyway. At the least, it would make start-up of this
particular service easier to debug.
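A hypothetical unit sketch of this setup (the service name and binary path
are placeholders, not anything real):

```ini
# mydb.service — sketch of the Type=notify arrangement described above
[Unit]
Description=Example database signalling readiness via sd_notify

[Service]
Type=notify
ExecStart=/usr/local/bin/mydb
# TimeoutStartSec=0 disables the start timeout entirely
TimeoutStartSec=0
# Optional: supervise the running service once it has sent READY=1
WatchdogSec=30s
NotifyAccess=main
```

The service then sends STATUS= updates during recovery and READY=1 when it
can serve clients, so `systemctl status` shows startup progress even
without a timeout in force.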

Michal


Re: [systemd-devel] Support for large applications

2016-02-19 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Feb 19, 2016 at 01:42:12PM +0100, Michal Sekletar wrote:
> On Wed, Feb 17, 2016 at 1:35 PM, Avi Kivity  wrote:
> 
> > 3. watchdog during startup
> >
> > Sometimes we need to perform expensive operations during startup (log
> > replay, rebuild from network replica) before we can start serving. Rather
> > than configure a huge start timeout, I'd prefer to have the service report
> > progress to systemd so that it knows that startup is still in progress.
> >
> 
> Did you have a look at sd_notify (man 3 sd_notify)? Basically, you can
> easily patch your service to report status to systemd and tell systemd
> exactly when it is ready to serve clients. That way you can avoid hacks
> like the huge start timeout you mentioned.

I don't think that helps, unless the service "lies" to systemd and
tells it has finished startup when it really hasn't (systemd would
ignore watchdog notifications during startup, and would do nothing if
they stopped coming, so the service has to tell systemd first that it
has started successfully, for the watchdog to be effective). Doing
that would fix this issue, but would mean that systemd wouldn't know
that the service is still starting and would for example start
subsequent jobs.

I don't think there's a way around the issue short of allowing
watchdog during startup. Databases which do long recovery are a bit
special, most programs don't exhibit this kind of behaviour, but maybe
this case is important enough to add support for it.

Zbyszek


Re: [systemd-devel] Support for large applications

2016-02-19 Thread Michal Sekletar
On Wed, Feb 17, 2016 at 1:35 PM, Avi Kivity  wrote:

> 3. watchdog during startup
>
> Sometimes we need to perform expensive operations during startup (log
> replay, rebuild from network replica) before we can start serving. Rather
> than configure a huge start timeout, I'd prefer to have the service report
> progress to systemd so that it knows that startup is still in progress.
>

Did you have a look at sd_notify (man 3 sd_notify)? Basically, you can
easily patch your service to report status to systemd and tell systemd
exactly when it is ready to serve clients. That way you can avoid hacks
like the huge start timeout you mentioned.

Michal