Re: [systemd-devel] Bonding wireless and wired internet connection makes my internet really slow

2018-05-04 Thread Kai Krakow
On Thu, 03 May 2018 14:34:34 +0300, Doron Behar wrote:

> Hi everyone,
> 
> I'm trying to bond my wireless and wired network interfaces using
> `systemd-networkd`. Since I didn't want to put any configuration in
> `/etc/systemd/network/` I don't fully understand, this is the
> configuration I've ended up with:
> 
>   $ cat /etc/systemd/network/10-bond0.netdev
>   [NetDev]
>   Name=bond0
>   Kind=bond
> 
> ---
> 
>   $ cat /etc/systemd/network/20-wired.network
>   [Match]
>   Name=enp0s25
>   
>   [Network]
>   Bond=bond0
> 
> ---
> 
>   $ cat /etc/systemd/network/25-wireless.network
>   [Match]
>   Name=wlp2s0
>   
>   [Network]
>   Bond=bond0
> 
> ---
> 
>   $ cat /etc/systemd/network/35-tethering.network
>   [Match]
>   Name=enp0s20u*
>   
>   [Network]
>   Bond=bond0
> 
> ---
> 
>   $ cat /etc/systemd/network/40-bond0.network
>   [Match]
>   Name=bond0
>   
>   [Network]
>   DHCP=yes
> 
> ---
> 
> As opposed to the old configuration which works as fast as light:
> 
>   $ cat /etc/systemd/network/20-wired.network
>   [Match]
>   Name=enp0s25
>   
>   [Network]
>   DHCP=yes
> 
> ---
> 
>   $ cat /etc/systemd/network/25-wireless.network
>   [Match]
>   Name=wlp2s0
>   
>   [Network]
>   DHCP=yes
> 
> ---
> 
>   $ cat /etc/systemd/network/35-tethering.network
>   [Match]
>   Name=enp0s20u*
>   
>   [Network]
>   DHCP=yes
> 
> ---
> 
> With the first configuration, if I reboot, I definitely have networking
> capabilities, but for some reason, after a while, browsing the web, for
> example, becomes extremely slow.
> 
> How can I debug this? Are there any best practices for the simplest
> bonding configuration? What am I missing here?

Do the wifi and cable networks connect to the same router? If so, it
probably doesn't know about the bonding, and network packets start to
travel in circles after a while.

You may be able to fix this by setting the bond mode to failover
instead of balancing. I think the mode is called "active-backup". But
usually the other end of the bond needs to know about this, too.
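
For example, a minimal sketch of the changed netdev file (untested here;
Mode= and MIIMonitorSec= are documented in systemd.netdev(5)):

$ cat /etc/systemd/network/10-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

[Bond]
# Failover instead of balancing: only one member carries traffic at a time.
Mode=active-backup
# Poll link state so the bond notices a dead member quickly.
MIIMonitorSec=1s

You can then check the active mode and the state of the members in
/proc/net/bonding/bond0.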

Another mode which may work is "balance-xor" which usually allows
attaching bonding members to different switches on the other side.

The mode defaults to balance-rr which is probably incompatible with
your setup.

BTW: Bonding may not improve throughput and may even hurt latency a lot
if you combine very different types of links, e.g. wifi and cable. So
"active-backup" may be the best mode here.


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] systemd-journald may crash during memory pressure

2018-02-10 Thread Kai Krakow
On Sat, 10 Feb 2018 09:39:35 -0800, vcaputo wrote:

>> After some more research, I found that vm.watermark_scale_factor may be
>> the knob I am looking for. I'm going to watch behavior now with a
>> higher factor (default = 10, now 200).
>> 
>> 
> Have you reported this to the kernel maintainers?  LKML?

No, not yet. I think they are aware of the issues as there's still 
ongoing work on memory allocations within kernel threads, and there's 
perceivable improvement with every new kernel version. Especially, btrfs 
has seen a few patches in this area.


> While this is interesting to read on systemd-devel, it's not the right
> venue.  What you describe sounds like a regression that probably should
> be improved upon.

I know it's mostly off-topic. But the problem is most visible in systemd-
journald, and I think there are some users here who may have a better 
understanding of the underlying problem, or who maybe even found 
solutions to it.

One approach for me was using systemd-specific slices, so it may be 
interesting to other people.
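
As a sketch of what I mean (the UID, path, and limit are examples, not
my exact configuration; MemoryMax= assumes the unified cgroup hierarchy,
on cgroup v1 the older MemoryLimit= applies):

$ cat /etc/systemd/system/user-1000.slice.d/limit.conf
[Slice]
MemoryAccounting=yes
# Keep this user's session below 80% of RAM so the kernel starts
# swapping before it discards all cache.
MemoryMax=80%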


> Also, out of curiosity, are you running dmcrypt in this scenario?  If
> so, is swap on dmcrypt as well?

No, actually not. I'm using bcache for the rootfs, which may have similar 
implications for memory allocations. Swap is just plain swap distributed 
across 4 disks.

If I understand correctly, dmcrypt may expose this problem even more 
because it needs to "double buffer" memory while passing it further down 
the storage stack.

I had zswap enabled previously, which may expose this problem, too. I now 
disabled it and later enabled THP again. THP now runs very well again, so 
it looks like zswap and THP don't play well together. OTOH, these options 
were switched on and off across different kernel versions, so it may also 
be an effect of fixes in newer kernels.


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] systemd-journald may crash during memory pressure

2018-02-10 Thread Kai Krakow
On Sat, 10 Feb 2018 14:23:34 +0100, Kai Krakow wrote:

> [...]

After some more research, I found that vm.watermark_scale_factor may be 
the knob I am looking for. I'm going to watch behavior now with a higher 
factor (default = 10, now 200).
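
For reference, persisting that looks like this (the file path is an
example):

$ cat /etc/sysctl.d/99-vm-watermark.conf
# Default is 10 (watermark distance of 0.1% of memory); higher values
# make kswapd start background reclaim earlier.
vm.watermark_scale_factor = 200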


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] systemd-journald may crash during memory pressure

2018-02-10 Thread Kai Krakow
On Sat, 10 Feb 2018 02:16:44 +0200, Uoti Urpala wrote:

> On Fri, 2018-02-09 at 12:41 +0100, Lennart Poettering wrote:
>> This last log lines indicates journald wasn't scheduled for a long
>> time which caused the watchdog to hit and journald was
>> aborted. Consider increasing the watchdog timeout if your system is
>> indeed that loaded and that's is supposed to be an OK thing...
> 
> BTW I've seen the same behavior on a system with a single active
> process that uses enough memory to trigger significant swap use. I
> wonder if there has been a regression in the kernel causing misbehavior
> when swapping? The problems aren't specific to journald - desktop
> environment can totally freeze too etc.

This problem seems to have been there since kernel 4.9, which was a real 
pita in this regard. It's progressively become better since kernel 4.10. 
The kernel seems to try to prevent swapping at any cost since then, or at 
least at the cost of much higher latency and of pushing all cache out of 
RAM.

The result is processes stuck for easily 30 seconds and more during 
memory pressure. Sometimes I see the kernel loudly complaining in dmesg 
about high wait times for allocating RAM, especially from the btrfs 
module. Thus, the biggest problem may be that kernel threads themselves 
get stuck in memory allocations and are victims of high latency.

Currently I'm running my user session in a slice with max 80% RAM, which 
seems to help: it avoids discarding all cache. I also put some 
potentially high memory users (regarding cache and/or resident mem) into 
slices with carefully selected memory limits (backup and maintenance 
services). Slices limited in such a way will start swapping before cache 
is discarded, and everything works better again. Part of this problem may 
be that I have one process running which mmaps and locks 1G of memory 
(bees, a btrfs deduplicator).

This system has 16G of RAM, which is usually plenty, but I use tmpfs to 
build packages in Gentoo, and while that worked wonderfully before 4.9, I 
have to be really careful now. The kernel happily throws away cache 
instead of swapping early. Setting vm.swappiness differently seems to 
have no perceivable effect.

Software that uses mmap is the first latency victim of this new behavior. 
As such, systemd-journald, too, seems to be hit hard by this.

After the system recovers from high memory pressure (which can take 
10-15 minutes, resulting in a loadavg of 400+), it ends up with some 
gigabytes of inactive memory in swap which it will only swap back in 
during shutdown (which will then also take some minutes).

The problem since 4.9 seems to be that the kernel tends to do swap storms 
instead of constantly swapping out memory at low rates during usage. The 
swap storms totally thrash the system.

Before 4.9, the kernel had no such latency spikes under memory pressure. 
Swap would usually grow slowly over time, and the system felt sluggish at 
one time or another but was still usable wrt latency. I usually ended up 
with 5-8G of swap usage, and that was no problem. Now, swap only 
significantly grows during swap storms with an unusable system for many 
minutes, with latencies of 10+ seconds around twice per minute.

I had no swap storm yet since the last boot, and swap usage is around 16M 
now. Before kernel 4.9, this would be much higher already.


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] systemd-journald may crash during memory pressure

2018-02-09 Thread Kai Krakow
On Thu, 08 Feb 2018 18:12:23 -0800, vcaputo wrote:

> Note the logs you've pasted portray a watchdog timeout which resulted in
> SIGABRT and a subsequent core dump.
> 
> This is not really a journald "crash", and you can increase the watchdog
> timeout or disable it entirely to make it more tolerant of thrashing.
> 
> What I presume happened is the system was thrashing and a page fault in
> the mapped journal took too long to complete.

Oh thanks, this is a good pointer. I'll try that.


> On Thu, Feb 08, 2018 at 11:50:45PM +0100, Kai Krakow wrote:
>> [...]

-- 
Regards,
Kai

Replies to list-only preferred.



[systemd-devel] systemd-journald may crash during memory pressure

2018-02-08 Thread Kai Krakow
Hello!

During memory pressure and/or high load, journald may crash. This is
probably due to its design using mmap, but it should really not do this.

On 32-bit systems, we are seeing such crashes constantly although the
available memory is still gigabytes (it's a 32-bit userland running on a
64-bit kernel).


[82988.670323] systemd[1]: systemd-journald.service: Main process exited, 
code=dumped, status=6/ABRT
[82988.670684] systemd[1]: systemd-journald.service: Failed with result 
'watchdog'.
[82988.685928] systemd[1]: systemd-journald.service: Service has no hold-off 
time, scheduling restart.
[82988.709575] systemd[1]: systemd-journald.service: Scheduled restart job, 
restart counter is at 2.
[82988.717390] systemd[1]: Stopped Flush Journal to Persistent Storage.
[82988.717411] systemd[1]: Stopping Flush Journal to Persistent Storage...
[82988.726303] systemd[1]: Stopped Journal Service.
[82988.844462] systemd[1]: Starting Journal Service...
[82993.633781] systemd-coredump[22420]: MESSAGE=Process 461 (systemd-journal) 
of user 0 dumped core.
[82993.633811] systemd-coredump[22420]: Coredump diverted to 
/var/lib/systemd/coredump/core.systemd-journal.0.3d492c866f254fb981f916c6c3918046.461.151812537700.lz4
[82993.633813] systemd-coredump[22420]: Stack trace of thread 461:
[82993.633814] systemd-coredump[22420]: #0  0x7f940241d4dd 
journal_file_move_to_object (libsystemd-shared-237.so)
[82993.633815] systemd-coredump[22420]: #1  0x7f940241e910 
journal_file_find_data_object_with_hash (libsystemd-shared-237.so)
[82993.633816] systemd-coredump[22420]: #2  0x7f940241fe81 
journal_file_append_data (libsystemd-shared-237.so)
[82993.633817] systemd-coredump[22420]: #3  0x556a343ae9ea write_to_journal 
(systemd-journald)
[82993.633819] systemd-coredump[22420]: #4  0x556a343b0974 
server_dispatch_message (systemd-journald)
[82993.633820] systemd-coredump[22420]: #5  0x556a343b24bb 
stdout_stream_log (systemd-journald)
[82993.633821] systemd-coredump[22420]: #6  0x556a343b2afe 
stdout_stream_line (systemd-journald)
[82993.723157] systemd-coredum: 7 output lines suppressed due to ratelimiting
[83002.830610] systemd-journald[22424]: File 
/var/log/journal/121b87ca633e8ac001665668001b/system.journal corrupted or 
uncleanly shut down, renaming and replacing.
[83014.774538] systemd[1]: Started Journal Service.
[83119.277143] systemd-journald[22424]: File 
/var/log/journal/121b87ca633e8ac001665668001b/user-500.journal corrupted or 
uncleanly shut down, renaming and replacing.


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] systemctl --user user can't restart their own service?

2017-11-17 Thread Kai Krakow
On Fri, 17 Nov 2017 10:00:35 -0800, Jeff Solomon wrote:

> Hi,
> 
> Is it by-design that a user can't restart their own user service?
> 
> I have worked around this by doing the following:
> 
> Override /lib/systemd/system/user@.service with a new file:
> 
> /etc/systemd/system/user@<UID>.service
> 
> I could have left out the <UID> if I wanted the override to apply to
> all users, but in this case, I want it to apply to only a single user.
> 
> In user@.service, I added:
> 
> Restart=always
> 
> to the [Service] section.
> 
> Viola! Now the user can just kill their own service (since they own it
> after all) and systemd will restart it for them.
> 
> Any problem with this workaround Lennart?

Services in systemd/system are NOT user services. user@.service is a
system service running with user privileges. I think the whole point of
exactly that instance is starting the user's systemd instance.

Real user services belong in the systemd/user folder (provided by the
admin or system, or by the user when below $HOME/.config).

With "systemctl --user" you manage exactly those services.

What's the point of letting the user restart user@.service anyway?
It would probably kill their login session or break other things. You
should restart individual user services instead. You can list them with
"systemctl --user status".

I have the feeling you didn't understand what user@.service really
is... It's in any case not "the user's own service". It's their systemd
instance. It is to your user login what init is to the OS. You wouldn't
restart init, would you? You probably understand that that'd be equal to
rebooting the system. Likewise, in the user instance case, it would
restart the complete session of the user.

You have to understand the difference: You could also create a service
like dropbox@.service. But it's not a user service then. It's a
system service instance running with user privileges. In that case it's
decoupled from the user session and would run without requiring the user
to log in. You can achieve something similar by enabling lingering for
user sessions. The difference is: the first case runs outside of the user
session context, the latter runs within the context. This has a direct
effect on how the cgroups apply.
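
To illustrate the second variant: a real user service plus lingering
could look like this (a sketch; the service name and binary path are
made up):

$ cat ~/.config/systemd/user/dropbox.service
[Unit]
Description=Dropbox sync daemon

[Service]
# Hypothetical daemon binary; this runs inside the user's session context.
ExecStart=/usr/bin/dropboxd
Restart=on-failure

[Install]
WantedBy=default.target

$ systemctl --user enable --now dropbox.service
$ loginctl enable-linger $USER  # keep it running without an open session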


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] MemoryLimit for user unit

2017-11-12 Thread Kai Krakow
On Sun, 12 Nov 2017 18:14:38 +0100, Stefan Schweter wrote:

> Hi systemd-users,
> 
> I tried to add a memory limit for a user service unit (inspired by
> [1]), it looks like:
> 
> [Service]
> # 
> MemoryAccounting=true
> MemoryLimit=1G
> 
> Now the problem is that the (user) service consumes more than 1G
> without being terminated.

As far as I can see, this limits the amount of RAM occupied. It
doesn't stop the memory from being swapped out; you need to limit swap
memory, too. Take note that swap accounting may have noticeable
overhead and as such is not enabled by default on many systems.


> htop shows a memory consumption of 1.4 GB. The output of
> `systemd-cgtop` is:
> 
> Control Group                 Tasks   %CPU   Memory  Input/s  Output/s
> /                                 -    1.5     1.7G        -
> /user.slice                      46    0.4    14.3M        -
> /user.slice/user-1001.slice      46    0.4    14.2M        -
> /init.scope                       1      -     1.4M        -
> /system.slice
> 
> 
> So my question is how would MemoryLimit= work for a user unit?

Maybe you want to apply the limit to a slice? Your output of cgtop
doesn't show any service units...
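
For example (a sketch; user-1001.slice matches the cgtop output above),
the limit could go onto the whole user slice at runtime:

$ systemctl set-property user-1001.slice MemoryAccounting=yes MemoryLimit=1G

set-property persists the change as a drop-in, so it survives reboots.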


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] How to generate core file in system service

2017-09-26 Thread Kai Krakow
On Mon, 25 Sep 2017 10:26:48 +0200, Miroslav Suchý wrote:

> On 25.9.2017 at 08:47, Mantas Mikulėnas wrote:
> > But when I start the daemon by "teamd" directly, I could get a core
> > file. When I start it by systemctl start teamd@team0.service, no
> > core file was generated when it crashed, but I only got:
> >    "systemd: teamd@team0.service: main process exited, code=killed,
> > status=6/ABRT"
> > in /var/log/messages.  
> 
> I guess this crash has been caught by ABRT.
> 
> Try:
>   abrt-cli list

Also try "coredumpctl list". Maybe systemd-coredump did catch the dump.

This is the case if sysctl kernel.core_pattern pipes to
/lib/systemd/systemd-coredump.
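
A quick way to check (this output is what a default systemd
installation typically shows):

$ sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %e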


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [systemd-devel] Dedup timers?

2017-09-26 Thread Kai Krakow
On Mon, 25 Sep 2017 22:45:23 -0700, Daniel Wang wrote:

> I have a number of timers that all look something like the following:
> 
> cat /etc/systemd/system/foo.timer
> [Unit]
> Description=Run foo every hour
> 
> [Timer]
> OnCalendar=hourly
> 
> cat /etc/systemd/system/bar.timer
> [Unit]
> Description=Run bar every minute
> 
> [Timer]
> OnCalendar=minutely
> 
> ... (this list goes on and on)
> 
> The boilerplate for such small things is killing me. Is there a good
> technique to replace them with something simpler?
> Maybe transient timers? What will be the drawbacks of transient timers
> compared to regular timers?

Make them into targets:

$ systemctl cat timer-{daily,hourly}.{target,timer}
# /etc/systemd/system/timer-daily.target
[Unit]
Description=Daily Timer Target
StopWhenUnneeded=yes

# /etc/systemd/system/timer-daily.timer
[Unit]
Description=Daily Timer

[Timer]
OnBootSec=10min
OnUnitActiveSec=1d
Unit=timer-daily.target
AccuracySec=12h
Persistent=yes

[Install]
WantedBy=timers.target

# /etc/systemd/system/timer-hourly.target
[Unit]
Description=Hourly Timer Target
StopWhenUnneeded=yes

# /etc/systemd/system/timer-hourly.timer
[Unit]
Description=Hourly Timer

[Timer]
OnBootSec=5min
OnUnitActiveSec=1h
Unit=timer-hourly.target
AccuracySec=30min
Persistent=yes

[Install]
WantedBy=timers.target



Then install your services into the target:

$ systemctl cat porticron.service
# /etc/systemd/system/porticron.service
[Unit]
Description=Check for upgrades and security updates

[Service]
Type=oneshot
IOSchedulingClass=idle
IOSchedulingPriority=7
CPUSchedulingPolicy=batch
Nice=7
ExecStart=/usr/sbin/porticron

[Install]
WantedBy=timer-daily.target


Enable timer targets:

$ systemctl enable timer-{daily,hourly}.timer
Created symlink /etc/systemd/system/timers.target.wants/timer-daily.timer → 
/etc/systemd/system/timer-daily.timer.
Created symlink /etc/systemd/system/timers.target.wants/timer-hourly.timer → 
/etc/systemd/system/timer-hourly.timer.


There's no need to enable the services then as they are triggered by
the timer targets:

$ systemctl list-timers
NEXT                          LEFT           LAST                          PASSED        UNIT                ACTIVATES
Tue 2017-09-26 22:07:50 CEST  58min left     Tue 2017-09-26 21:07:50 CEST  1min 29s ago  timer-hourly.timer  timer-hourly.target
Wed 2017-09-27 01:22:26 CEST  4h 13min left  Tue 2017-09-26 01:16:59 CEST  19h ago       timer-daily.timer   timer-daily.target


As the targets are stopped when unneeded (and thus each triggered
service with them), they will be fired again next time. Just make sure
your triggered services actually stop: they should be Type=oneshot.
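
As for the transient timers asked about above: they can be created with
systemd-run, e.g. (a sketch; the unit name is made up)

$ systemd-run --on-calendar=hourly --unit=foo-transient /usr/bin/foo

Such transient units only live in /run and do not survive a reboot,
which is probably their main drawback compared to regular timers.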


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [systemd-devel] Github systemd issue 6237

2017-07-08 Thread Kai Krakow
On Sat, 8 Jul 2017 08:05:44 +0200, Kai Krakow <hurikha...@gmail.com>
wrote:

> On Sat, 8 Jul 2017 11:39:02 +1000 (AEST), Michael Chapman
> <m...@very.puzzling.org> wrote:
> 
> > On Sat, 8 Jul 2017, Kai Krakow wrote:
> > [...]  
> > > The bug here is that a leading number will "convert" to the number
> > > and it actually runs with the UID specified that way: 0day = 0,
> > > 7days = 7.
> > 
> > No, this is not the case. Only all-digit User= values are treated
> > as UIDs.
> 
> Then behavior is "correct".

Or in other words: The original bug description is wrong. The bug isn't
with non-existent users. That works fine.


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] Github systemd issue 6237

2017-07-08 Thread Kai Krakow
On Sat, 8 Jul 2017 11:39:02 +1000 (AEST), Michael Chapman
<m...@very.puzzling.org> wrote:

> On Sat, 8 Jul 2017, Kai Krakow wrote:
> [...]
> > The bug here is that a leading number will "convert" to the number
> > and it actually runs with the UID specified that way: 0day = 0,
> > 7days = 7.  
> 
> No, this is not the case. Only all-digit User= values are treated as
> UIDs.

Then behavior is "correct".


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] Github systemd issue 6237

2017-07-07 Thread Kai Krakow
On Tue, 4 Jul 2017 21:23:01 +0000 (UTC), Alexander Bisogiannis wrote:

> On Tue, 04 Jul 2017 17:21:01 +0000, Zbigniew Jędrzejewski-Szmek wrote:
> 
> > If you need root permissions to create a unit, then it's not a
> > security issue. An annoyance at most.  
> 
> The fact that you need to be root to create a unit file is irrelevant.
> 
> Systemd is running a service as a different user to what is defined
> in the unit file. 
> This is a bug and a local security issue, especially because it will
> run said service as root.
> 
> It might not warrant a CVE, although in my line of work this is 
> considered a security issue, but it is a bug and needs fixing.
> 
> The fix is to refuse to run the service, period.

There's nothing to fix because it already works that way: If you give
it a valid user name that does not exist, the system refuses to start
the unit with "user not found".

If you give it an invalid user name (leading digits, disallowed
characters), then it complains with a warning and continues to run as
if you had specified no user (thus it runs as root).

The bug here is that a leading number will "convert" to the number and
it actually runs with the UID specified that way: 0day = 0, 7days = 7.
But this is not really a security concern as only root can create units
that contain a user - unless you open exploits for that, but then you
have other problems anyway.
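
To illustrate (a hypothetical, intentionally broken unit):

[Service]
# "0day" is not a valid user name; the broken conversion described
# above may take the leading digit and run this as UID 0, i.e. root.
User=0day
ExecStart=/bin/true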

Conclusion: Not a security issue. If you can trick an admin into
accepting unit files without validating their contents, you have other
issues than an issue with systemd.


> Is there any other place I can go to open a bug, or do I need to go
> to the upstream "vendor" bugzilla?

Maybe open a new issue and suggest that the current "conversion" should
be upgraded from a warning to a fatal error. Give examples of the
behavior you get and the behavior you expect. Also give counter-examples
of behavior that works as you expect. Don't try to troll; after all,
it's the developers' forum, and it only works if people stay with the
facts. Otherwise it becomes unusable, and nobody wants that.

The best way to get it into one of the next releases is to prepare a
pull request that fixes the issue.


-- 
Regards,
Kai

Replies to list-only preferred.




Re: [systemd-devel] Dropping core with Systemd.

2017-05-15 Thread Kai Krakow
On Mon, 15 May 2017 11:54:08 -0400, Steve Dickson wrote:

> Hello,
> 
> I want rpcbind to drop core so I can debug 
> something but systemd keeps getting in the way
> 
> systemd: rpcbind.service: Main process exited, code=killed, status=6/ABRT
> audit: ANOM_ABEND auid=4294967295 uid=32 gid=32 ses=4294967295 subj=system_u:system_r:rpcbind_t:s0 pid=2787 comm="rpcbind" exe="/usr/bin/rpcbind" sig=6
> systemd: rpcbind.service: Unit entered failed state.
> audit: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpcbind comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
> systemd: rpcbind.service: Failed with result 'signal'.
> systemd: Starting RPC Bind...
> systemd: Started RPC Bind.
> 
> How do I stop systemd from restarting rpcbind and allow
> the process to drop core?
> 
> Note, this problem only happens when systemd starts rpcbind
> and ypbind, so I need systemd to start the processes.

Usually you should find it listed when you run "coredumpctl".

For this to work, sysctl should have:

kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %e
kernel.core_pipe_limit = 0
kernel.core_uses_pid = 1

This should be the default with a standard systemd installation. You may
need to mask/unmask one sysctl file; check
/usr/lib/sysctl.d/50-coredump.conf.
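
Once that is in place, the next crash can be inspected like this (a
sketch):

$ coredumpctl list rpcbind
$ coredumpctl gdb rpcbind   # open the most recent matching dump in gdb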


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] 11 MB cost of DefaultMemoryAccounting=yes

2017-05-11 Thread Kai Krakow
On Thu, 11 May 2017 10:26:33 +0200, Umut Tezduyar Lindskog wrote:

> Hello,
> 
> Even though this is not a systemd problem, I believe systemd mailing
> list is a good place to discuss.
> 
> Our kernel has CONFIG_MEMCG enabled. As soon as we set
> DefaultMemoryAccounting=yes, our system wide memory usage increased 11
> MB. The increase is mostly on kmalloc-* slab memory with the peak on
> kmalloc-32.
> 
> I initially thought the increase is due to systemd creating
> system.slice under /sys/fs/cgroup/memory but I think I am wrong. I
> have run "systemd-run -p MemoryLimit=10M /bin/sleep 5" command while
> DefaultMemoryAccounting=no and there was no significant memory usage.
> 
> I am quite puzzled about where this extra cost is coming from. Does
> anybody have any idea?

I think this is documented in the kernel: memory accounting needs some
extra memory. For swap accounting, it is even more.

If you look at the kernel documentation: does this explain your issue?
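
One way to narrow it down is to compare slab usage with accounting
switched on and off, e.g. (a sketch):

$ slabtop --once --sort=c | head -n 15   # biggest slab caches first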

-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] systemd-resolved continuously switching DNS servers

2017-05-10 Thread Kai Krakow
On Tue, 9 May 2017 20:37:16 +0200, Lennart Poettering
<lenn...@poettering.net> wrote:

> On Tue, 09.05.17 00:42, Kai Krakow (hurikha...@gmail.com) wrote:
> 
> > On Sat, 6 May 2017 14:22:21 +0200, Kai Krakow
> > <hurikha...@gmail.com> wrote:
> >   
> > > On Fri, 5 May 2017 20:18:41 +0200, Lennart Poettering
> > > <lenn...@poettering.net> wrote:
> > >   
>  [...]  
>  [...]  
>  [...]  
> > > 
> > > It looks like this all has to do with timeouts:  
> > 
> > Fixed by restarting the router. The cable modem seems to be buggy
> > with UDP packets after a lot of uptime: it simply silently drops UDP
> > packets at regular intervals, WebUI was also very slow, probably a
> > CPU issue.
> > 
> > I'll follow up on this with the cable provider.
> > 
> > When the problem starts to show up, systemd-resolved is affected
> > more by this than direct resolving. I don't know if there's
> > something that could be optimized in systemd-resolved to handle
> > such issues better but I don't consider it a bug in
> > systemd-resolved, it was a local problem.  
> 
> Normally configured DNS servers should be equivalent, and hence
> switching them for each retry should not come at any cost. So, besides
> the extra log output, do you experience any real issues?

Since I restarted the router, there are no longer any such logs except
maybe a few per day (less than 4).

But when I got those logs spammed to the journal, the real problem was
the DNS resolver taking 10s about once per minute to resolve a website
address - which really was a pita.

But well, what could systemd-resolved have done about it when the real
problem was some network equipment?

I just wonder why it was less visible when directly using those DNS
servers. Since DNS must have been designed with occasional packet loss
in mind (because it uses UDP), there must be a way to handle this
better. So I read a little bit in https://www.ietf.org/rfc/rfc1035.txt.

RFC1035 section 4.2.1 suggests that the retransmission interval for
queries should be 2-5 seconds, depending on statistics of previous
queries. To me, "retransmissions" means the primary DNS server should
not be switched for each query timeout (while still allowing the same
request to be transferred to the next available server).

RFC1035 section 7 discusses the suggested implementation of the
resolver and covers retransmission and server selection algorithms:

It suggests recording the average response time for each server it
queries so as to select the ones which respond faster first. Without
query history, the selection algorithm should assume a response time of
5-10 seconds.

It also suggests switching the primary server only when there was some
"bizarre" error or a server error reply. However, I don't think it
should actually remove servers from the list as suggested there, since
we are a client resolver, not a server resolver which can update its
peer lists from neighbor servers. Instead, we could reset the query time
statistics to move a server to the end of the list, and/or blacklist it
for a while.

Somewhere else in that document I've read that it is well permitted to
interleave multiple parallel requests to multiple DNS servers in the
list. So I guess it would be nice and allowed if systemd-resolved used
more than one DNS server at the same time by alternating between them on
each request - maybe taking the two best according to query time
statistics.

I also guess that it should maybe use shorter timeouts for queries when
there is more than one DNS server: the initial query time statistic
should assume 5-10 seconds, while the rotation interval suggests 2-5
seconds.

I think it would work to have "10 seconds divided by server count" or
2 seconds, whichever is bigger, as a timeout for query rotation (e.g.
with 4 servers: max(10/4, 2) = 2.5 seconds per server). But a late reply
should still be accepted, as pointed out in section 7.3, even when the
query was already rotated to the next DNS server. Using only a single
DNS server can skip all this logic as there's no rotation, and it would
work with a timeout of 10 seconds.

That way, systemd-resolved would "learn" to use only the fastest DNS
server and when it becomes too slow (which is 5-10 seconds based on the
RFC), it would switch to the next server. If parallel requests come in,
it would use more DNS servers from the list in parallel, auto-sorted by
query reply time. The startup order is the one given by the
administrator (or whatever provides the DNS server list).

Applied to my UDP packet loss (which seems to be single packet losses,
as an immediate next request would've got a reply), it would mean that
the systemd resolver gives me an address after 2-3 seconds instead of 5
or 10, because I had 4 DNS servers on that link. This is more or less
what I've seen previously in my situation when I switched back to using
the DNS servers directly.

Re: [systemd-devel] systemd-resolved continuously switching DNS servers

2017-05-08 Thread Kai Krakow
On Sat, 6 May 2017 14:22:21 +0200, Kai Krakow <hurikha...@gmail.com>
wrote:

> On Fri, 5 May 2017 20:18:41 +0200, Lennart Poettering
> <lenn...@poettering.net> wrote:
> 
> > On Fri, 05.05.17 01:01, Kai Krakow (hurikha...@gmail.com) wrote:
> >   
> > > Hello!
> > > 
> > > Why is systemd-resolved switching DNS servers all day long? This
> > > doesn't seem to be right...
> > 
> > If you turn on debug logging, you should see an explanation right
> > before each switch. I figure we should choose the log levels more
> > carefully, so that whenever we switch we also log the reason at the
> > same level...  
> 
> It looks like this all has to do with timeouts:

Fixed by restarting the router. The cable modem seems to be buggy with
UDP packets after a lot of uptime: it simply silently drops UDP
packets at regular intervals, WebUI was also very slow, probably a CPU
issue.

I'll follow up on this with the cable provider.

When the problem starts to show up, systemd-resolved is affected more
by this than direct resolving. I don't know if there's something that
could be optimized in systemd-resolved to handle such issues better but
I don't consider it a bug in systemd-resolved, it was a local problem.

Thanks,
Kai

 
> [...]

Re: [systemd-devel] systemd-resolved continuously switching DNS servers

2017-05-06 Thread Kai Krakow
On Fri, 5 May 2017 20:18:41 +0200, Lennart Poettering
<lenn...@poettering.net> wrote:

> On Fri, 05.05.17 01:01, Kai Krakow (hurikha...@gmail.com) wrote:
> 
> > Hello!
> > 
> > Why is systemd-resolved switching DNS servers all day long? This
> > doesn't seem to be right...  
> 
> If you turn on debug logging, you should see an explanation right
> before each switch. I figure we should choose the log levels more
> carefully, so that whenever we switch we also log the reason at the
> same level...

It looks like this all has to do with timeouts:

Mai 06 14:17:09 jupiter systemd-resolved[5585]: Cache miss for ssl.gstatic.com IN AAAA
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Transaction 54375 for <ssl.gstatic.com IN AAAA> scope dns on enp5s0/*.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Using feature level UDP for transaction 54375.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Using DNS server fe80::b248:7aff:fee7:f438%2 for transaction 54375.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Sending query packet with id 54375.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Timeout reached on transaction 33004.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Retrying transaction 33004.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Switching to DNS server 2a02:8109:1ec0:6f5:5667:51ff:feea:385f for interface enp5s0.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Cache miss for ssl.gstatic.com IN A
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Transaction 33004 for <ssl.gstatic.com IN A> scope dns on enp5s0/*.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Using feature level UDP for transaction 33004.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Using DNS server 2a02:8109:1ec0:6f5:5667:51ff:feea:385f for transaction 33004.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Sending query packet with id 33004.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Processing incoming packet on transaction 33004. (rcode=SUCCESS)
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Not validating response for 33004, used server feature level does not support DNSSEC.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Added positive unauthenticated cache entry for ssl.gstatic.com IN A 143s on */INET6/2a02:8109:1ec0:6f5:5667:51ff:feea:385f
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Transaction 33004 for <ssl.gstatic.com IN A> on scope dns on enp5s0/* now complete with <success> from network (unsigned).
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Sending response packet with id 42127 on interface 1/AF_INET.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Sending response packet with id 22131 on interface 1/AF_INET.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Processing incoming packet on transaction 54375. (rcode=SUCCESS)
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Not validating response for 54375, used server feature level does not support DNSSEC.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Added positive unauthenticated cache entry for ssl.gstatic.com IN AAAA 203s on enp5s0/INET6/fe80::b248:7aff:fee7:f438
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Transaction 54375 for <ssl.gstatic.com IN AAAA> on scope dns on enp5s0/* now complete with <success> from network (unsigned).
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Freeing transaction 33004.
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Sent message type=method_return sender=n/a destination=:1.352 object=n/a interface=n/a member=n/a cookie=234 reply_cookie=2 error=n/a
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=RemoveMatch cookie=235 reply_cookie=0 erro
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.273 object=n/a interface=n/a member=n/a cookie=181 reply_cookie=235 error=n/a
Mai 06 14:17:09 jupiter systemd-resolved[5585]: Freeing transaction 54375.

I just don't understand why, because all these nameservers work
perfectly well when used directly and not through the stub resolver.


-- 
Regards,
Kai

Replies to list-only preferred.



[systemd-devel] systemd-resolved continuously switching DNS servers

2017-05-04 Thread Kai Krakow
Hello!

Why is systemd-resolved switching DNS servers all day long? This
doesn't seem to be right...

Mai 05 00:52:46 jupiter systemd-resolved[658]: Switching to DNS server 192.168.4.254 for interface enp5s0.
Mai 05 00:52:53 jupiter systemd-resolved[658]: Switching to DNS server fe80::b248:7aff:fee7:f438%2 for interface enp5s0.
Mai 05 00:52:53 jupiter systemd-resolved[658]: Switching to DNS server 2a02:8109:1ec0:6f5:5667:51ff:feea:385f for interface enp5s0.
Mai 05 00:52:58 jupiter systemd-resolved[658]: Switching to DNS server 192.168.4.254 for interface enp5s0.
Mai 05 00:52:59 jupiter systemd-resolved[658]: Switching to DNS server fe80::b248:7aff:fee7:f438%2 for interface enp5s0.
Mai 05 00:53:02 jupiter systemd-resolved[658]: Switching to DNS server 2a02:8109:1ec0:6f5:5667:51ff:feea:385f for interface enp5s0.
Mai 05 00:53:06 jupiter systemd-resolved[658]: Switching to DNS server 192.168.4.254 for interface enp5s0.
Mai 05 00:53:07 jupiter systemd-resolved[658]: Switching to DNS server fe80::b248:7aff:fee7:f438%2 for interface enp5s0.
Mai 05 00:53:12 jupiter systemd-resolved[658]: Switching to DNS server 2a02:8109:1ec0:6f5:5667:51ff:feea:385f for interface enp5s0.
Mai 05 00:53:33 jupiter systemd-resolved[658]: Switching to DNS server 192.168.4.254 for interface enp5s0.
Mai 05 00:53:35 jupiter systemd-resolved[658]: Switching to DNS server fe80::b248:7aff:fee7:f438%2 for interface enp5s0.
Mai 05 00:53:40 jupiter systemd-resolved[658]: Switching to DNS server 2a02:8109:1ec0:6f5:5667:51ff:feea:385f for interface enp5s0.
Mai 05 00:54:01 jupiter systemd-resolved[658]: Switching to DNS server 192.168.4.254 for interface enp5s0.
Mai 05 00:54:02 jupiter systemd-resolved[658]: Switching to DNS server fe80::b248:7aff:fee7:f438%2 for interface enp5s0.
Mai 05 00:54:08 jupiter systemd-resolved[658]: Switching to DNS server 2a02:8109:1ec0:6f5:5667:51ff:feea:385f for interface enp5s0.

Also, name resolving seems to be very slow. When I directly use these
name servers in resolv.conf instead of going through the stub resolver,
there are no slow-downs and these logs stop.


-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] journal fragmentation on Btrfs

2017-04-17 Thread Kai Krakow
On Mon, 17 Apr 2017 16:01:48 +0200, Kai Krakow <hurikha...@gmail.com>
wrote:

> > We also ask btrfs to defrag the file as soon as we mark it as
> > archived...  
> 
> This makes sense. And I've learned that journal on btrfs works much
> better if you use many small files vs. a few big files. I've currently
> set the journal size limit to 8 MB for that reason which gives me very
> good performance.

Hmm well, I just looked: I eventually stopped doing that, probably when
you introduced defragging of archived journals. But I see no journal
file bigger than 128M, which seems to work well.

-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] journal fragmentation on Btrfs

2017-04-17 Thread Kai Krakow
On Mon, 17 Apr 2017 11:57:21 +0200, Lennart Poettering wrote:

> On Sun, 16.04.17 14:30, Chris Murphy (li...@colorremedies.com) wrote:
> 
> > Hi,
> > 
> > This is on a Fedora 26 workstation (systemd-233-3.fc26.x86_64)
> > that's maybe a couple weeks old and was clean installed. Drive is
> > NVMe.
> > 
> > 
> > # filefrag *
> > system.journal: 9283 extents found
> > user-1000.journal: 3437 extents found
> > # lsattr
> > C-- ./system.journal
> > C-- ./user-1000.journal
> > 
> > I do manual snapshots before software updates, which means new
> > writes to these files are subject to COW, but additional writes to
> > the same extents are overwrites and are not COW because of chattr
> > +C. I've used this same strategy for a long time, since
> > systemd-journald defaults to +C for journal files; but I've not
> > seen them get this fragmented this quickly.
> >  
> 
> IIRC NOCOW only has an effect if set right after the file is created
> before the first write to it is done. Or in other words, you cannot
> retroactively make a file NOCOW. This means that if you in one way or
> another make a COW copy of a file (through reflinking — implicit or
> not, note that "cp" reflinks by default — or through snapshotting or
> something else) the file is COW and you'll get fragmentation.

To mark a file nocow, it has to exist with zero bytes and never have
been written to. The nocow attribute (chattr +C) will be inherited from
the directory upon creation of a file. So the best way to go is setting
+C on the directory, and all future files of the journal will be nocow.
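
A sketch of that, assuming the default persistent journal location
(only files created afterwards inherit the flag):

$ chattr +C /var/log/journal/$(cat /etc/machine-id)
$ journalctl --rotate   # new journal files will inherit nocow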

You can still do snapshots; nocow doesn't prohibit that and doesn't
make journals cow again. What happens is that btrfs simply unshares
extents as soon as you write to snapshotted data. The newly created
extent itself will behave like nocow again. If the extents are big
enough, this shouldn't introduce any serious fragmentation, just waste
space. Btrfs won't split extents upon unsharing them during a write. It
may, however, "replace" only part of the unshared extent, thus making
three new ones: two sharing the old copy, one having the new data. But
since journals are append-only, that should be no problem. It's just
that the data is written so slowly that writes almost never become
combined into a single write, resulting in many extents.

> I am not entirely sure what to recommend you. Ultimately whether btrfs
> fragments or not, is probably something you have to discuss with the
> btrfs folks. We do try to make the best of btrfs, by managing the COW
> flag, but this only helps you to a limited degree as
> snapshots/reflinks will fuck things up anyway...

Well, usually you shouldn't have to manage the cow flag at all: just
set it once for the newly created journal directory and everything is
fine. And even then, people may not want this, so they could easily
unset the flag on the directory and rotate the journal.

> We also ask btrfs to defrag the file as soon as we mark it as
> archived...

This makes sense. And I've learned that journal on btrfs works much
better if you use many small files vs. a few big files. I've currently
set the journal size limit to 8 MB for that reason which gives me very
good performance.

> I'd even be willing to extend on that, and defrag the file
> on other events too, for example if it ends up being too heavily
> fragmented.

Since the append behavior of btrfs is so bad wrt journal files, it
should be enough to simply let btrfs defrag the previously written
journal block upon appending to the file: Lennart, I think you are
hinting to the OS that the file is going to grow and thus truncate it to
8 MB beyond the current end of file to continue writing. That would be a
good event to let btrfs defrag the old 8 MB block (and just that, not
the complete file). If this works well, you could maybe skip defragging
the complete file upon rotation, which should improve disk IO
performance during rotation.

I think the default extent size hint for defragging with btrfs defrag
has been set to 32 MB lately, so it would be enough to maybe do the
above step every 32 MB.
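
From userspace, defragging just a range is already possible today, e.g.
(a sketch; the path and offsets are examples):

$ btrfs filesystem defragment -s 0 -l 32M /var/log/journal/$(cat /etc/machine-id)/system.journal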

> But last time I looked btrfs didn't have any nice API for
> that, that would have a clear focus on a single file only...

The high number of extents may not be an indicator of fragmentation
when btrfs compression is used. Compressed data will be organized in
logical 128k units which are reported as fragments to filefrag; in
reality they may be laid out continuously on disk, so there is no real
fragmentation. It would be interesting to see the block map of this.

-- 
Regards,
Kai

Replies to list-only preferred.




Re: [systemd-devel] My experience with MySQL and systemctl

2017-04-11 Thread Kai Krakow
On Tue, 11 Apr 2017 15:08:35 +0200, Lennart Poettering wrote:

> > Eventually, after checking all our backups, I decided to issue kill
> > -9 to mysqld. I then decided to try restarting the daemon using
> > systemctl. It did start up, the log output showed the crash recovery
> > procedure, but because it entered into the rollback recovery,
> > systemctl never considered that the process had finished starting
> > up, and then tried to kill it again, which failed (only kill -9
> > would work in this case). Again, log output was closed.  
> 
> Again, we really don't do that. Logging is actually pretty independent
> from service management when it comes to keeping the connections open:
> all systemd does is set up the connection, fork off the process and
> that's it. The forked processes can keep the connections open as long
> as they wish, or even pass them on to other processes if they like...
> 
> That said, there's one long-standing race: if a process logs through
> syslog(), and very quickly after that exits we under some
> circumstances have trouble matching up the log output with the
> originating service. That's because the SCM_CREDENTIAL data we get for
> the logged messages might reference a PID which already disappeared
> from /proc/$PID which we hence cannot read the cgroup membership info
> from.
> 
> Hence: maybe the logs are there, but the filtering didn't work the way
> you expect?

Chances are that he's using the mysqld_safe script which does all kinds
of stuff in a bash script, like detaching the daemon, pipe the logging,
do process monitoring. Essentially, what systemd can already do just
better. It's also doing the "kill -9" stuff.

I'd like to know first if the daemon is properly started by systemd
itself and not some intermediate script.

I'm using the following drop-in as a starter:

$ cat /etc/systemd/system/mysqld.service.d/override.conf
[Service]
Type=simple
Restart=on-failure
ExecStart=
ExecStart=/usr/sbin/mysqld --defaults-file=/etc/mysql/my.cnf --pid-file=/run/mysqld/mysqld.pid
RuntimeDirectory=mysqld
RuntimeDirectoryMode=0755
WorkingDirectory=/var/lib/mysql


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] more verbose debug info than systemd.log_level=debug?

2017-04-10 Thread Kai Krakow
Am Mon, 10 Apr 2017 13:54:27 +0200
schrieb Lennart Poettering <lenn...@poettering.net>:

> On Mon, 10.04.17 13:43, Kai Krakow (hurikha...@gmail.com) wrote:
> 
> > Am Mon, 10 Apr 2017 11:04:45 +0200
> > schrieb Lennart Poettering <lenn...@poettering.net>:
> >   
>  [...]  
> > > 
> > > Yeah, we do what we can.
> > > 
> > > But I seriously doubt FIFREEZE will make things better. It's just
> > > going to make shutdowns hang every now and then.  
> > 
> > It could simply thaw the FS again after freeze to somewhat improve
> > on that. At least everything that should be flushed is now flushed
> > at that point and grub et al should be happy.
> > 
> > But I wonder why filesystems not just flush the journal on
> > remount-ro? It may take a while but I think that can be perfectly
> > expected when rmounting ro: At least I would expect that this
> > forces out all pending writes to the filesystem hence flushing the
> > journal.  
> 
> Well, the remount-ro doesn't succeed in the case this is all about:
> the plymouth process appears to run off the root fs and keeps the
> executable pinned, which was deleted because updated, and thus the
> kernel will refuse the remount. See other mail.

Ah okay, so given that case, a journal flush isn't even attempted; it
fails right away. My first idea was that it should flush the journal
but could fail anyway. I didn't get that point. Hence my assumption
that remount-ro doesn't flush the journal.

> > So a final freeze/thaw cycle is probably the only way to go? As it
> > specifies what is needed here to be compatible with configurations
> > that involve grub on complex filesystems.  
> 
> A pair of FIFREEZE+FITHAW are likely to work, but it's frickin' ugly
> (see other mails), and I'd certainly prefer if the fs folks would
> provide a proper ioctl/syscall for the operation we need. Quite
> frankly it doesn't appear like a particularly exotic operation, in
> fact the operation we'd need would probably be run much more often
> than the operation that FIFREEZE/FITHAW was introduced for...

Yes, it's ugly, and there should be a proper ioctl/syscall for the
exact semantics needed. Usually, working around such missing APIs only
results in the needed bits never being implemented. I totally
understand your point. ;-)


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] high latency on systemd-resolved responses of LLMNR names

2017-04-10 Thread Kai Krakow
Am Mon, 10 Apr 2017 12:46:02 +0200
schrieb Lennart Poettering <lenn...@poettering.net>:

> On Mon, 10.04.17 12:41, Kai Krakow (hurikha...@gmail.com) wrote:
> 
> > > Queries and responses in LLMNR are supposed to be delayed by a
> > > random time up to 100ms according to the RFC. See:
> > > 
> > > https://tools.ietf.org/html/rfc4795 section 2.7, and section 7.
> > > 
> > > If you add up the delay for the query and the response you'll get
> > > a delay up to 200ms for a full transaction.  
> > 
> > Well it seems a bit more complicated:
> > 
> > The random delays are a combined value of jitter and timeout. And it
> > depends on whether you're currently the sender or responder:
> > Responders have shorter timeouts (only jitter), according to
> > section 2.7.
> > 
> > It also says SHOULD wait (so it is a good idea), and it says the
> > jitter MAY be ignored if the responder knows the name is unique.
> > Only the sender MUST wait for timeout+jitter.
> > 
> > This is where the 200ms come from, but it may even be 1000+100ms if
> > you are connected to none-IEEE 802.* media, e.g. VPN or PPP
> > interfaces and LLMNR is configured to listen on these, as far as I
> > understand section 7.
> > 
> > According to RFC2119, the terminology SHOULD suggests that systemd
> > could maybe make this configurable? Maybe taking the proper
> > warnings for this configuration into account for administrators...
> > Still you would see delays of at least 100ms then because the
> > sender MUST wait.  
> 
> I am not a fan of configuration options in zeroconfey technology like
> this one I must say. I mean "zeroconf" is about "zero configuration",
> hence making it configurable creates kind of a tautology...

As I pointed out it wouldn't have a big effect anyways, so probably
you're perfectly right.

Is there a way to know the delays used, i.e. whether it is 1000 or 100 ms?


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] more verbose debug info than systemd.log_level=debug?

2017-04-10 Thread Kai Krakow
Am Mon, 10 Apr 2017 11:04:45 +0200
schrieb Lennart Poettering :

> > Remember, all of this is because there *is* software that does the
> > wrong thing, and it *is* possible for software to hang and be
> > unkillable. It would be good for systemd to do the right thing even
> > in the presence of that kind of software.  
> 
> Yeah, we do what we can.
> 
> But I seriously doubt FIFREEZE will make things better. It's just
> going to make shutdowns hang every now and then.

It could simply thaw the FS again after freeze to somewhat improve on
that. At least everything that should be flushed is now flushed at that
point and grub et al should be happy.

But I wonder why filesystems don't just flush the journal on
remount-ro? It may take a while, but I think that can be perfectly
expected when remounting ro: at least I would expect this to force out
all pending writes to the filesystem, hence flushing the journal.

Tho, readonly mounts do not guarantee the filesystem not modifying the
underlying storage device. For example, btrfs can modify the storage
even when mounting an unmounted fs in ro mode. It guarantees readonly
from user-space perspective - and I think that's totally on par with the
specs of "mount -o ro".

So a final freeze/thaw cycle is probably the only way to go? As it
specifies what is needed here to be compatible with configurations that
involve grub on complex filesystems.
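
For what it's worth, that cycle can be exercised from user space with
util-linux's fsfreeze (a sketch; the mount point is just an example):

fsfreeze --freeze /boot    # issues FIFREEZE, forces everything out
fsfreeze --unfreeze /boot  # issues FITHAW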

Then, what about underlying cache infrastructures like BBU-backed RAID
caches? We had systems that failed on reboot because the BBU was in a
relearning cycle at reboot time and the controller thus refused to
replay the write cache during POST and instead discarded it. That can
really make a big mess, btw. Tho, I think that's a controller bug: the
cache wasn't set to always write back but only when it's safe. But
this suggests that the reboot code should even force a cache flush for
those components.

Taken everything into account it boils down to eventually not using
grub on XFS but only simple filesystems, or depend on ESP only for
booting. Everything else only means that systemd (and other init
systems) have to invent a huge complex mess to fix everything that
isn't done right by other involved software.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] high latency on systemd-resolved responses of LLMNR names

2017-04-10 Thread Kai Krakow
Am Mon, 10 Apr 2017 10:26:14 +0200
schrieb Lennart Poettering :

> On Sun, 09.04.17 19:22, Paul Freeman (p...@coredev.org.uk) wrote:
> 
> > Hi,
> >  We are seeing high latency (>100ms) when resolving local names via
> >  LLMNR.  
> 
> Queries and responses in LLMNR are supposed to be delayed by a random
> time up to 100ms according to the RFC. See:
> 
> https://tools.ietf.org/html/rfc4795 section 2.7, and section 7.
> 
> If you add up the delay for the query and the response you'll get a
> delay up to 200ms for a full transaction.

Well it seems a bit more complicated:

The random delays are a combined value of jitter and timeout. And it
depends on whether you're currently the sender or responder: Responders
have shorter timeouts (only jitter), according to section 2.7.

It also says SHOULD wait (so it is a good idea), and it says the jitter
MAY be ignored if the responder knows the name is unique. Only the
sender MUST wait for timeout+jitter.

This is where the 200ms come from, but it may even be 1000+100ms if
you are connected to non-IEEE 802.* media, e.g. VPN or PPP interfaces
and LLMNR is configured to listen on these, as far as I understand
section 7.

According to RFC2119, the terminology SHOULD suggests that systemd could
maybe make this configurable? Maybe taking the proper warnings for this
configuration into account for administrators... Still you would see
delays of at least 100ms then because the sender MUST wait.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Add DATADOS, fat32 to fstab file

2017-04-06 Thread Kai Krakow
Am Fri, 7 Apr 2017 00:03:16 +0200
schrieb Reindl Harald :

> Am 06.04.2017 um 22:15 schrieb Gary Evans:
> > I tried to copy
> >
> > UUID=B813-BB28  /boot/efi  vfat umask=0077 0  1
> >
> > for my DOS data partition but it caused Debian not to boot. This is
> > how it's configured:
> >
> > UUID=202E-E8BE  /DATADOS  vfat  umask=0077  0  1
> >
> > Where did this go awry? Please help.  
> 
> "caused Debian not to boot" is no valueable information
> http://www.catb.org/esr/faqs/smart-questions.html#beprecise
> 
> "i tried to copy" - see above
> 
> UUID=B813-BB28 != UUID=202E-E8BE
> 
> if anything in /etc/fstab is not available at boot and not dfined
> with "nofail" you get a emergency shell because a as mandatory
> defined filesystem is missing - however, without a crystal ball
> nobody knows what you are trying to achive and what happens...

While you raise a good point with the "crystal ball", /boot/efi is
absolutely not essential for starting the system after the boot
loader; in a standard systemd configuration it should even automount,
so missing it won't halt booting. As long as the contents of /boot/efi
were not touched, the system should boot fine, unless it is an EFI
system and the "new" DOS partition has been marked as ESP (maybe by
cloning the partition entry).

But well, the crystal ball...

At a first glance this looks like /boot/efi has been reformatted to be
used for DOS, and then mounted as /DATADOS. Now the ESP is gone. So
actually, it's not Debian that's no longer booting, it's probably the
boot loader that no longer works. But that actually is not systemd's
fault, neither systemd has any responsibility whatsoever.

But the OP is coming here nevertheless, which suggests that systemd
indeed logs an error message that it can no longer mount /boot/efi,
and that it's not been marked as nofail.

If this is the case, I can only imagine that the ESP has been
reformatted to act as a DOS partition and maybe the contents have been
copied back. The system only still boots, because a non-EFI boot loader
has been installed.

First suggestion: Don't touch the ESP. The ESP should not even be
visible to DOS afaict, so the OP probably started to fiddle around
until it "worked", and broke the rest in the process.
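
For completeness, a data-partition entry that cannot block booting
would look something like this (a sketch; only the UUID is taken from
the original post - note the nofail option and the fsck pass field set
to 0):

UUID=202E-E8BE  /DATADOS  vfat  umask=0077,nofail  0  0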

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Temporarily stopping a service while oneshot is running

2017-03-30 Thread Kai Krakow
Am Thu, 30 Mar 2017 16:13:25 +0200
schrieb Lennart Poettering :

> On Tue, 21.03.17 07:47, Ian Pilcher (arequip...@gmail.com) wrote:
> 
> > I have a oneshot service (run from a timer) that updates the TLS
> > certificates in my mod_nss database.  Because NSS doesn't support
> > concurrent access to the database, I need to temporarily shut down
> > Apache while the certificate update service is running.
> > 
> > Currently, I'm using the following entries in my .service file to
> > accomplish this:
> > 
> >   [Unit]
> >   Description=Update TLS certificates in mod_nss database
> >   # Restart Apache, even if this service fails for some reason
> >   OnFailure=httpd.service
> > 
> >   [Service]
> >   Type=oneshot
> >   # Shut down Apache to avoid concurrent access to the mod_nss
> > database ExecStartPre=/usr/bin/systemctl stop httpd.service
> >   ExecStart=/usr/local/bin/update-nss-certs
> >   ExecStartPost=/usr/bin/systemctl start httpd.service
> > 
> > Is this the best way to do this?  (I can't escape the feeling that
> > there ought to be a more idiomatic way of accomplishing this.)  
> 
> Yes, this appears to be the best, and simplest way to do this to me.

Isn't there a chance that this introduces races when triggered while a
bigger transaction is being executed by systemd? It always feels wrong
to trigger actions that may affect the transaction that is currently
being executed...

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Temporarily stopping a service while oneshot is running

2017-03-21 Thread Kai Krakow
Am Tue, 21 Mar 2017 07:47:59 -0500
schrieb Ian Pilcher :

> I have a oneshot service (run from a timer) that updates the TLS
> certificates in my mod_nss database.  Because NSS doesn't support
> concurrent access to the database, I need to temporarily shut down
> Apache while the certificate update service is running.
> 
> Currently, I'm using the following entries in my .service file to
> accomplish this:
> 
>[Unit]
>Description=Update TLS certificates in mod_nss database
># Restart Apache, even if this service fails for some reason
>OnFailure=httpd.service
> 
>[Service]
>Type=oneshot
># Shut down Apache to avoid concurrent access to the mod_nss
> database ExecStartPre=/usr/bin/systemctl stop httpd.service
>ExecStart=/usr/local/bin/update-nss-certs
>ExecStartPost=/usr/bin/systemctl start httpd.service
> 
> Is this the best way to do this?  (I can't escape the feeling that
> there ought to be a more idiomatic way of accomplishing this.)

Would "Conflicts=" help here?

Or you simply do not use this as a service but better define a drop-in
for httpd.service:

# systemctl edit httpd.service

[Service]
ExecStartPre=-/usr/local/bin/update-nss-certs


Now, upon starting (or restarting) the httpd.service, the certs are
updated. You can now program the timer to restart httpd. The minus in
front makes failing to do so non-fatal to the service startup.
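
If you go that route, the timer just needs a small helper service that
restarts httpd. A sketch (both unit names are made up):

# /etc/systemd/system/httpd-restart.service
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl try-restart httpd.service

# /etc/systemd/system/httpd-restart.timer
[Timer]
OnCalendar=*-*-* 03:00:00

[Install]
WantedBy=timers.target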

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Can a systemd --user instance rely on After= of systemd --system instance?

2017-03-06 Thread Kai Krakow
Am Sat, 4 Mar 2017 22:07:57 +0300
schrieb Andrei Borzenkov :

> 04.03.2017 13:49, Peter Hoeg пишет:
> > Hi,
> >   
> >> If I have a user service which needs to have the system database
> >> server available: How do I construct a proper depend?  
> > 
> > As Lennart was pointing out, the user and system instances do not
> > know anything about each other, so you cannot.
> >   
> 
> "You cannot because it is currently unimplemented", "you cannot
> because we do not care but feel free to implement" or "you cannot
> because we will never even consider it"?
> 
> > The 2 other options I can think of:
> > 
> > a) Run a system service specifying your user id in User=
> >   
> 
> The problem has nothing to do with ownership so it won't help.

I'm running it in the user context for different reasons. I don't think
that setting the UID is the same. Additionally, I want users to be able
to restart their services during deployments.

> > b) Enable socket activation (if possible) on the system instance
> > database. That way your user instance will simply wait on the socket
> > until the server comes up.
> >   
> 
> This means you need to set absurdly large TimeoutStart because you
> have no idea when other service appears, so every time you attempt to
> start service you will need to wait absurdly large time before
> proceeding with error handling. Proper service dependency won't even
> attempt to start dependent service until dependencies are known to
> run.

Yes, and setting such TimeoutStart values is not meant to be the fix,
I'm sure. The database service here takes 2-3 minutes to fully start
and be ready. This is due to high IO load during startup, but also due
to fully checking the database from the service's ExecStartPost.

> systemd already propagates some unit types (devices, mounts and swaps)
> from system to user instance. Are there any fundamental problems that
> prevent doing it for other types?

I'd imagine something like Requires=system/mysqld.service to prefix the
dependency to the system instance.


-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Can a systemd --user instance rely on After= of systemd --system instance?

2017-03-03 Thread Kai Krakow
Am Sun, 26 Feb 2017 21:35:27 +0100
schrieb Lennart Poettering :

> On Sat, 25.02.17 17:34, Patrick Schleizer
> (patrick-mailingli...@whonix.org) wrote:
> 
> > Hi,
> > 
> > I read, that a systemd --user instance cannot use Requires=.
> > 
> > But what about After=? Can a systemd --user instance use
> > After=some-system.service?  
> 
> The units of the --user instance live in an entirely disjunct
> namespace from those in the --system instance. Hence yes, you can
> absolutely use After= and/or Requires= between two user services, but
> it will always just be between two *user* services, and never between
> a user and a system service, since the unit state engines of the
> system and user instance are completely disconnected, as said.

Which brings me back to something I wondered about:

If I have a user service which needs to have the system database server
available: How do I construct a proper depend?

Currently, my user services time out during boot because the database
server is simply not ready fast enough. Thus I'd like to trigger
starting those services only after the database server is ready.

Even putting "Requires" and "After" into the user@ template doesn't
seem to respect this... (or I'm missing some secondary dependency)

My next attempt would be to fire up user sessions with a timer only
after a certain time has passed after boot. But that doesn't feel
right...

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [networkd] Mixing DHCP & static IPs on 1 interface

2017-02-22 Thread Kai Krakow
Am Tue, 21 Feb 2017 18:30:07 -0600
schrieb Ian Pilcher :

> I'm trying to find a way to do this with systemd-networkd.
> 
> The reason is that my cable modem listens on a 192.168.X.X address.
> Normally this "just works".  My firewall tries to send traffic
> destined for this address to my ISP's router, and the cable modem
> intercepts the packets and responds.
> 
> If I lose connectivity, however, my firewall doesn't have a default
> route, so it doesn't know where to send packets destined for
> 192.168.X.X.  The net result is that I lose connectivity to my cable
> modem's diagnostic pages at exactly the time that I need to access
> them. (OK, I don't really lose connectivity; I just have to manually
> add an IP address on the proper subnet to the firewall's external
> interface. It works, but it's so ... MANUAL!  :-)
> 
> My goal is to have both the DHCP assigned address (from my ISP) and
> the static address always configured on the external interface.  I've
> tried creating two separate .network files that match the interface,
> but only the DHCP address is getting assigned.  (The old network
> service actually is able to set this up on boot, but the static IP
> eventually goes away. I suspect that dhclient is deleting it when it
> renews its lease.)

The difference may be that the previous network script created alias
interfaces, like eth0:0, eth0:1...

You could try to create an alias interface with systemd-networkd, and
assign that the static IP. But how to do this is currently beyond my
knowledge.
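
Untested idea: systemd-networkd can list a static Address= next to
DHCP= in a single .network file, so something like this might already
do the trick without an alias interface (interface name and address
are just examples):

[Match]
Name=eth0

[Network]
DHCP=ipv4
Address=192.168.100.2/24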

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] 2017: systemd-readahead?

2017-02-06 Thread Kai Krakow
Am Mon, 6 Feb 2017 21:52:09 +0100
schrieb Lennart Poettering :

> On Tue, 10.01.17 12:14, Che (comandantegri...@gmail.com) wrote:
> 
> > What's the 2017 status of systemd-readahead..? Permanently dead?  
> 
> Well, we removed it from the systemd tree, and suggested other folks
> to take it over. I am not aware that anyone did.

Gentoo extracted a snapshot of the last git tree containing readahead.
The build then configures the tree to only build the readahead binary.
I think it needs to be patched from time to time to still build
against updated system dependencies, but I haven't seen a version bump
yet.

However, I finally ditched it. I figured it would slow down boot on an
aging system. Instead, I added bcache to my spinning rust plus an
affordable SSD. This works very well and reduces boot times much more
than readahead did (by a factor of at least 5).

The ebuild contents should give you a clue how it's done:

https://packages.gentoo.org/packages/sys-apps/systemd-readahead

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Stable interface names even when hardware is added or removed, not true

2016-12-01 Thread Kai Krakow
Am Thu, 17 Nov 2016 18:53:53 +0100
schrieb Lennart Poettering :

> On Wed, 16.11.16 23:19, Pekka Sarnila (sarn...@adit.fi) wrote:
> 
> > Well my first point was that the web page should not say
> >   
>  [...]  
> 
> I now added a small extension to this line: "(to the level the
> firmware permits this)" ot clarify that we are bound by firmware
> limitations for this.

Well, I think there's a common misconception between what people
understand and what you would like them to understand.

After reading all this I asked myself: What's the point of stable
interface names anyways if it's going to change upon device add/remove?
And I think that's actually the point: Name stability is not for device
add/remove (in an ideal world, it would be). Your intention is to have
stable names across reboots and remove races from device detection.

I think this should be pointed out better. In the common case, with
usual firmwares out there, names change in unpredictable ways if you
swap hardware. This, of course, totally contradicts what the man page says
about "even when hardware is added/removed"...

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] after=user.slice not enforced

2016-11-24 Thread Kai Krakow
Am Wed, 23 Nov 2016 09:14:34 +0100
schrieb Cédric BRINER :

> Hi,
> 
> For the context, we are trying to stop a daemon launched by a user...
> 
> >> Hi,
> >>
> >> sapRunning service contains a "After=user.slice". But at the
> >> shutdown, a process (write-sysv-test.pl) running in user.slice is
> >> killed before the end of the sapRunning's stop.  
> > 
> > Slices are a concept for resource management, and that's what they
> > should be used for. Do not user them for anything else, such as
> > ordering purposes.
> > 
> > In systemd shutdown ordering is the inverse of start-up ordering,
> > and After= and Before= declare the latter. This means that if your
> > service has After=user.slice, this means at shutdown your service
> > will be stopped first and user.slice second.  
> Thanks for the clarification.
> 
> But this has not the expected impact. We were wishing with the
> "After=user.slice", that the stop sapRunning will occur before any
> user commands are stopped.
> 
> Does using "After=user.slice" propagate also on all the *childs*. That
> way we could ensure that our stop services' commmand is launched as
> the first ever before any kill ?
> 
> The question still remain for us, how can we do to have a daemon
> launched by hand, that can be handled by systemd for its stopping.

I think you could maybe use targets as synchronization points... Maybe
create a target - let's call it sap-started.target - that starts after
multi-user.target and requires it. Then put your service as wanted by
this new target (maybe also using After= and Requires=), and make that
the default target at boot.

That way, on shutdown, it should first stop and wait for
sap-started.target, and only then pull down the rest of the system.

But what's the purpose of stopping user processes only after this
service was stopped? This does make no sense to me. I'd instead define
proper dependencies.


-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Script in /usr/lib/systemd/system-shutdown not executed on init 6

2016-11-24 Thread Kai Krakow
Am Thu, 24 Nov 2016 10:39:52 +0100
schrieb Benoit SCHMID :

> Hello,
> 
> On 11/23/2016 03:52 PM, Reindl Harald wrote:
> > so why do you strip the whole context and rest of the response?
> >
> > what  is your exactly problem?
> >  
> I think my original question on this thread is clear.
> > it should be no rocket science to define a service, order starting
> > as needed which ensures at the same time that stop services happens
> > in the exactly reverse order
> >  
> It is more than rocket science to have the same behaviour
> for my SAP system behaviour on RH7 like on RH6.
> On RH6 I could boot my server.
> Then a few days after I could manually restart dedicated processes.
> On the next shutdown, I knew my processes would be cleanly stopped.
> 
> With RH7, my commands' executions need to be wrapped by systemd
> service. Otherwise, as L. Poettering said, these daemons are not
> stopped cleanly at shutdown.
> This what I have understood.
> 
> > maybe you should stop looking at old sysv scripts and just start
> > from scratch and define what you need in a proper systemd unit -
> > for many things you will find out that a lot of magic from init
> > scripts is not needed at all
> >  
> This is what I am trying to do :-)
> 
> > for me it ssems what you trying to to is wrap everything from a init
> > script in a native systemd-unit which is wrong from the start in
> > most cases   
> "trying to wrap everything from init script" is not my overall goal.
> My over goal is to administer my SAP systems on my new RH7 servers.
> 
> I have constraints from RH7 and from SAP and from Oracle DB.
> After reading that there is systemv systemd generator,
> I thought that I could have all the old functionalities with systemd.
> It is not the case.
> Therefore I am trying to find the best compromises to run my SAP
> systems. This implies understanding what I can do and what I cannot
> do with systemd. This is why I ask questions like the one on this
> thread.

What exactly is the init script in question doing, without
generalizing it to a simple "echo" and "touch"?

I once had an init script that did dirty user-switching hackery: first
set up some things, then switch the user and start the service using
"su".

This does not seem to work properly as "su" spawns a new session. Your
service executable is then no longer part of what systemd knows about
the service it spawned. During shutdown, it cannot properly stop it
(actually, it simply ignores the PID from the su session) and will
eventually just kill it later. That means the core of your service
loses all defined stop dependencies and will, by definition, be killed
too late at shutdown.

Even OpenRC did not properly stop this service when used in cgroup
mode. From what I saw, using su simply switches to another cgroup, so
the service wasn't sent signals properly.

Instead, you should leave the setup and user switching to systemd.
That means setting up directories for example with "RuntimeDirectory=",
or doing other setup with "ExecStartPre=". You can switch the user
with a native systemd directive, even applying it only to "ExecStart="
but not "ExecStartPre=".

In my case, I ported the cumbersome script to native systemd mechanics
with proper user switching, and everything has been solid since.

The original script was something like a start-stop-daemon emulation,
exporting a simple start/stop interface to command line, thus every
init system could be easily wrapped around it. Its core daemon is a
closed source piece of software and the "init" script was just a
convenience compatibility layer for sysvinit systems - but due to its
implementation incompatible with cgroup-based init systems.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] how to run a script which takes about 30 seconds before shutdown

2016-11-12 Thread Kai Krakow
Am Thu, 10 Nov 2016 13:07:52 +0800
schrieb zerons :

> On 11/09/2016 09:43 PM, Andrei Borzenkov wrote:
> > On Wed, Nov 9, 2016 at 4:11 PM, zerons 
> > wrote:  
> >> Hi everyone.
> >>
> >> Everyday, I need to do something like `git pull` after system
> >> bootup and `git push` before shutdown. I am using Ubuntu 16.04.
> >> I have tried to put some script into /etc/rc0.d/, /etc/rc6.d/,
> >> each time the script runs, the network has been stopped, so I
> >> turn to systemd.
> >>
> >>
> >> === Here is a test .service file.
> >> [Unit]
> >> Description=test systemd
> >> Conflicts=reboot.target
> >> After=network-online.target
> >> Wants=network-online.target  
> > 
> > network-online.target itself does not do anything. You need some
> > service that actually does waiting, or at least orders itself
> > correctly on startup and shutdown. If you are using NetworkManager,
> > it is NetworkManager-wait-online.service. Is it enabled?
> >   
> No, that doesn't work. Then I realize the `ping` error message, that
> is not unknown host, so I put `ifconfig wlp9s0` into `test1.sh`, the
> result shows that at that moment, the net interface has already been
> shut off.
> 
> I change the "After=" and "Wants=" to
> "After=NetworkManager-wait-online.service"
> "Wants=NetworkManager-wait-online.service"
> and, yes, it is enabled.

Shouldn't it be "Requires" instead of "Wants" for your case? Your
service cannot run without network being online... "Wants" is just an
optional dependency. If network cannot be started, there's no point in
running your script.

If, as a consequence, your service doesn't run at all, the transaction
is incomplete.
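
A sketch of the [Unit] section with the stronger dependency (unit
names taken from this thread):

[Unit]
Description=test systemd
Conflicts=reboot.target
Requires=NetworkManager-wait-online.service
After=NetworkManager-wait-online.service network-online.target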

You could also try switching to systemd-networkd instead. It provides
systemd-networkd-wait-online.service also. I'm not sure if it supports
wifi interfaces, tho.

> >> [Service]
> >> Type=oneshot
> >> RemainAfterExit=yes
> >> ExecStart=-/home/zerons/.bin/test.sh
> >> ExecStop=/home/zerons/.bin/test1.sh
> >>
> >> [Install]
> >> WantedBy=multi-user.target
> >>
> >>
> >> === and test.sh script, please ignore the destination
> >> #!/bin/bash
> >>
> >> echo "bootup" >> /home/zerons/.bin/test
> >> timeout 9 ping -c 1 www.baidu.com >/dev/null 2>&1
> >> while [ $? -ne 0 ]
> >> do
> >> sleep 1
> >> timeout 9 ping -c 1 www.baidu.com >/dev/null 2>&1
> >> done
> >>
> >> ping -c 4 www.baidu.com >> /home/zerons/.bin/test 2>&1
> >>
> >>
> >> === test1.sh
> >> #!/bin/bash
> >>
> >> echo "before shutdown"`date +%T` >> /home/zerons/.bin/test
> >> ping -c 8 www.baidu.com >>/home/zerons/.bin/test 2>&1
> >>
> >>
> >>
> >> === the result, on my laptop
> >> before shutdown20:04:35
> >> PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
> >> 64 bytes from 61.135.169.125: icmp_seq=1 ttl=56 time=9.57 ms
> >> ping: sendmsg: Network is unreachable
> >> ...
> >> ping: sendmsg: Network is unreachable
> >>
> >> --- www.a.shifen.com ping statistics ---
> >> 8 packets transmitted, 1 received, 87% packet loss, time 7048ms
> >> rtt min/avg/max/mdev = 9.579/9.579/9.579/0.000 ms
> >>
> >>
> >>
> >> === some other infomation
> >> I reboot several times, but the `test1.sh` always got 87% packet
> >> loss,,,
> >>
> >> When I take these steps in a Ubuntu16.04 virtual machine, it works
> >> fine, the `test1.sh` gets 0% packet loss before shutdown. I also
> >> test on a laptop with SSD, `test1.sh` gets 87% packet loss.
> >>
> >> How could I make this work? Is there a way, when `test1.sh` runs,
> >> all the other services could not be stopped until `test1.sh`
> >> returns?
> >>
> >> ___
> >> systemd-devel mailing list
> >> systemd-devel@lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/systemd-devel  
> ___
> systemd-devel mailing list
> systemd-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/systemd-devel



-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] rpcbind.socket failing

2016-11-01 Thread Kai Krakow
Am Tue, 1 Nov 2016 12:05:43 -0400
schrieb Steve Dickson :

> rpcbind.service: Failed at step EXEC spawning /usr/bin/rpcbind: No
> such file or directory

Do you still use DefaultDependencies=no?

Then /usr is probably not available that early (now that the unit can
start much earlier since /run is available). What's the reason for
disabling default dependencies anyway?

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] rpcbind.socket failing

2016-10-31 Thread Kai Krakow
Am Mon, 31 Oct 2016 13:19:24 -0400
schrieb Steve Dickson :

> Upstream has come up with some new rpcbind service socket files
> and I'm trying to incorporate them into f25.
> 
> The rpcbind.socket is failing to come up
>rpcbind.socket: Failed to listen on sockets: No such file or
> directory Failed to listen on RPCbind Server Activation Socket.
> 
> But the rpcbind.socket does exist 
> # file /var/run/rpcbind.sock
> /var/run/rpcbind.sock: socket
> 
> and everything comes up when the daemon is started by hand.

I guess the problem is with the listen directives... See below.

> old rpcbind.socket file:
> 
> [Unit]
> Description=RPCbind Server Activation Socket
> 
> [Socket]
> ListenStream=/var/run/rpcbind.sock

Probably not your problem, but it should really point to /run directly
because /var may not be mounted at that time. Even if /var/run is
usually a symlink, /var needs to be ready to resolve the symlink. But
otoh I'm not sure if systemd even supports /var being mounted later
and not by the initramfs.

> [Install]
> WantedBy=sockets.target
> 
> 
> New rpcbind.socket file:
> 
> [Unit]
> Description=RPCbind Server Activation Socket
> DefaultDependencies=no
> RequiresMountsFor=/var/run /run
> Wants=rpcbind.target
> Before=rpcbind.target
> 
> [Socket]
> ListenStream=/var/run/rpcbind.sock
> 
> # RPC netconfig can't handle ipv6/ipv4 dual sockets
> BindIPv6Only=ipv6-only
> ListenStream=0.0.0.0:111
> ListenDatagram=0.0.0.0:111
> ListenStream=[::]:111
> ListenDatagram=[::]:111

I'm not sure, but I don't think you can combine BindIPv6Only with
listening on IPv4 sockets at the same time. What if you remove the two
IPv4-style listen directives?

Maybe remove the complete block if you are going to use only unix
sockets.
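
In that case the whole [Socket] section would shrink to something like
this (a sketch; note it points at /run directly):

[Socket]
ListenStream=/run/rpcbind.sock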

> [Install]
> WantedBy=sockets.target
> 
> 
> ListenStream is the same for both files... 
> How do I debugging something like this?
> 
> tia,
> 
> steved.
> 
> 
> 
> ___
> systemd-devel mailing list
> systemd-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/systemd-devel



-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] BUG: Huge Problem: systemd does not mount all filesystems on Boot

2016-10-14 Thread Kai Krakow
Am Fri, 14 Oct 2016 19:58:13 +0300
schrieb Andrei Borzenkov :

> 14.10.2016 12:11, Juergen Sauer пишет:
> > Moin,
> > 
> > Systemd 231, Archlinux current
> > 
> > The concept is /home is to be mounted from nfs server. Works.
> > For performance reasons (email thunderbird, kde/plasma 5.8.x i.e odr
> > gnome are unusable if home is pure nfs) the user sub dirs .local,
> > .cache, .config, .thunderbird etc. are mounts from local fs (btrfs
> > subvolume).
> > 
> > Network is without NetWorkmanager (masked), fixed configured via
> > systemd-networkd.
> > 
> > The Problem is, after boot /home is mounted or not, random result.
> > WTF?!? 
> 
> local mounts are by default ordered before remote mounts, so you have
> dependency loop. Systemd resolves it by deleting one of units in this
> loop. Depending on what unit gets deleted you get different results. I
> do not know how deterministic algorithm is.

I'd create something like /home/local/leander which contains your local
directories. This also requires only one subvolume mount for exactly
this one directory. Then put symlinks into your /home/leander to the
local directories.

Since local mounts are strictly ordered before remote mounts, you can
be sure that the local directories are available when your nfs home dir
is mounted.
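
A sketch of the layout (only the user name is taken from this thread,
everything else is made up):

# one local subvolume mount instead of many:
LABEL=pc11root  /home/local  btrfs  noatime,subvol=user  0 0

# inside the NFS home, symlinks point at the local copies:
ln -s /home/local/leander/cache  /home/leander/.cache
ln -s /home/local/leander/config /home/leander/.config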

> > The BUG is:
> > The btrfs subvols are i.g. mounted - disobeying the
> > x-systemd.requires=home.mount rule.
> >   
> 
> Yes, systemdm will create mount point if it does not exist. I am still
> unsure whether this is a good thing.
> 
> ...
> 
> > 
> > 192.168.11.10:/home/home   nfs
> > nofail,x-systemd.device-timeout=1,x-systemd.requires=network-online.target
> > 0 0
> >   
> 
> systemd already adds dependency on network-online.target to network
> mounts. But do not forget that you need something that actually
> implements waiting for network, otherwise this requirement is
> effectively noop.
> 
> > LABEL=pc11root /home/leander/.cache   btrfs
> > ssd,ssd_spread,discard,compress=lzo,noatime,subvol=user/leander/cache,x-systemd.requires=home.mount
> > 0 0  
> 
> systemd adds equivalent to RequiresMountsFor=mount-point so
> x-systemd.requires here is redundant.
> ...
> 
> > 
> > 
> > How to fix?
> >  
> 
> Try adding _netdev to each mount that depends on /home to order them
> after network.


-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] proper way for shutdown script

2016-10-07 Thread Kai Krakow
Am Wed, 05 Oct 2016 14:40:52 +0200
schrieb Xen :

> Xen schreef op 05-10-2016 14:37:
> 
> > And this works. But now the service must be started first before it
> > will be called on shutdown... :-/.  
> 
> I guess the package installer would have to start the service after 
> installation which would be a solution in that sense, it needs to
> enable the service anyway.
> 
> Also I don't understand why /etc/init.d/libnss-ldap masks 
> /etc/systemd/system/libnss-ldap.service for the enable call.
> 
> It will see the init script and then call the sysv thing and will
> never get to the actual service file I have created?
> 
> Thanks for your help.

You could make a path unit that watches for changes to the
configuration file and then runs the script.


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Auto-start of a Service in systemd

2016-10-06 Thread Kai Krakow
Am Wed, 5 Oct 2016 18:00:41 +0530
schrieb "Raghavendra. H. R" :

> Andrei,
> 
> Your doubt is absolutely correct. Default target of the system as
> nothing to do with auto start of services.
> 
> I checked both graphical.target & multi-user.target, surprisingly I
> don't see any big difference in these. Both of the files are almost
> same except multi-user.target have dependency *After=* with
> *rescue.service & rescue.target* which is restricting
> multi-user.target from starting.
> 
> However graphical.target don't depend on rescue services, so it is
> active & started. And by making graphical.target as dependency in my
> unit file solved my problem.
> 
> Hopefully if I remove the rescue services dependency from
> multi-user.target and add it as dependency then my service should
> come up without failures.

"After=" does not have such an impact. It won't block a service from
starting if the services in "After=" aren't started. It's just an
ordering dependency. If the dependents aren't enabled they are just
ignored. Instead, "Requires=" and "Wants=" give stronger dependencies.

You can check the status of multi-user.target:

$ systemctl status multi-user.target
● multi-user.target - Multi-User System
   Loaded: loaded (/usr/lib/systemd/system/multi-user.target; static; vendor preset: disabled)
   Active: active since Mo 2016-10-03 16:44:14 CEST; 3 days ago
     Docs: man:systemd.special(7)

Okt 03 16:44:14 jupiter.sol.local systemd[1]: Reached target Multi-User System.

$ systemctl get-default
graphical.target

As you see, multi-user.target has been pulled in for me. You can check
the order of targets started with:

$ systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

graphical.target @5.383s
└─multi-user.target @5.383s
  └─machines.target @5.383s
└─systemd-nspawn@gentoo\x2delasticsearch\x2dbase.service @2.219s +3.164s
  └─network.target @2.204s
└─systemd-networkd.service @2.031s +172ms
  └─dbus.service @2.005s
└─basic.target @1.920s
  └─sockets.target @1.920s
└─docker.socket @1.896s +24ms
  └─sysinit.target @1.889s
└─systemd-timesyncd.service @1.649s +239ms
  └─systemd-tmpfiles-setup.service @1.576s +66ms
└─local-fs.target @1.575s
  └─run-user-500.mount @5.403s
└─local-fs-pre.target @565ms
  └─systemd-tmpfiles-setup-dev.service @383ms +179ms
└─kmod-static-nodes.service @148ms +42ms
  └─system.slice
└─-.slice

Also, take note that "After=" only waits until the other unit has
finished starting up, which for simple services doesn't mean the
daemon is actually ready. Maybe your service is just triggered way too
early? You may want to add "After=network.target" or similar
synchronization points from the graph above.

Note that after editing "WantedBy=" you may need to "systemctl
reenable" your service. If you didn't use "systemctl edit --full", you
may also need to run "systemctl daemon-reload" before re-enabling the
service. Otherwise you may see strange effects similar to what you
described.
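
For reference, the sequence would be ("your.service" standing in for
the unit in question):

$ systemctl daemon-reload
$ systemctl reenable your.service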

> Thanks for your valuable feedback.
> 
> Regards,
> Raghavendra H R
> 
> 
> 
> --
> Regards,
> 
> Raghavendra. H. R
> (Raghu)
> 
> On Wed, Oct 5, 2016 at 4:55 PM, Andrei Borzenkov 
> wrote:
> 
> > On Wed, Oct 5, 2016 at 1:19 PM, Raghavendra. H. R
> >  wrote:  
> > > It's working fine now. We should give the default target of the
> > > system  
> > for  
> > > WantedBy= of the Install section.
> > > So I used graphical.target in the Install section and it fixed my
> > > issue. 
> >
> > I doubt it was the reason. grpahical.target pulls in
> > multi-user.target unless you have very customized unit definitions.


-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Is it possible to load unit files from paths other than default paths ?

2016-09-27 Thread Kai Krakow
Am Tue, 27 Sep 2016 06:33:40 +0300
schrieb Andrei Borzenkov <arvidj...@gmail.com>:

> 27.09.2016 05:10, Kai Krakow пишет:
> > Am Mon, 26 Sep 2016 14:30:37 +0530
> > schrieb "Raghavendra. H. R" <raghuh...@gmail.com>:
> >   
> >> Andrei,
> >>
> >> How to set SYSTEMD_UNIT_PATH in Systemd ?  
> > 
> > Maybe try "systemctl set-environment"? You may need to run
> > "systemctl daemon-reload" after this for the new unit files to pick
> > up. 
> 
> No, that does not work. It was already discussed previously. This is
> environment for services that are started by systemd, while you need
> to set it before starting systemd. This is challenging for something
> that runs as the very first process ever ... :)

If that stuff needs to run so isolated, one could try packaging it as
an nspawn container...


-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Is it possible to load unit files from paths other than default paths ?

2016-09-26 Thread Kai Krakow
Am Mon, 26 Sep 2016 14:30:37 +0530
schrieb "Raghavendra. H. R" :

> Andrei,
> 
> How to set SYSTEMD_UNIT_PATH in Systemd ?

Maybe try "systemctl set-environment"? You may need to run "systemctl
daemon-reload" after this for the new unit files to pick up.

BTW: Please stop top-posting... It's very inconvenient to follow your
messages with an NNTP reader.


> I checked about systemd source code in github. Please see this link
> 
> https://github.com/search?q=org%3Asystemd+systemd_unit_path=Code
> 
> Even in these source files they are doing getenv and setenv for
> SYSTEMD_UNIT_PATH. I dont see any conf file using which we can
> configure the environment variable.
> 
> 
> Regards,
> Raghu.
> 
> 
> --
> Regards,
> 
> Raghavendra. H. R
> (Raghu)
> 
> On Mon, Sep 26, 2016 at 1:59 PM, Andrei Borzenkov
>  wrote:
> 
> > On Mon, Sep 26, 2016 at 10:59 AM, Raghavendra. H. R
> >  wrote:  
> > > These are instructions which I tried.
> > >
> > > mkdir -p /BingoDast
> > > mount -t nfs -o nolock
> > > :/tftpboot/raghu/BingoDast /BingoDast
> > >
> > > export PATH=$PATH:/BingoDast/bin
> > > export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/BingoDast/lib
> > > export SYSTEMD_UNIT_PATH=/BingoDast/units
> > > echo $SYSTEMD_UNIT_PATH
> > > /BingoDast/units
> > >
> > > systemctl start bingod.service
> > > Failed to start bingod.service: Unit bingod.service failed to
> > > load: No  
> > such  
> > > file or directory.
> > >  
> >
> > SYSTEMD_UNIT_PATH has to be set in environment of systemd, not
> > systemctl. 
> > > Other options which I tried for setting SYSTEMD_UNIT_PATH are
> > > given  
> > below.  
> > >
> > > 1. Just gave the environment variable directly on the console
> > >  SYSTEMD_UNIT_PATH=/BingoDast/units
> > >
> > > 2. Gave the environment variable along with DefaultEnvinronment
> > > tag DefaultEnvironment=SYSTEMD_UNIT_PATH=/BingoDast/units
> > >
> > >
> > > Regards,
> > > Raghu
> > >
> > >
> > > --
> > > Regards,
> > >
> > > Raghavendra. H. R
> > > (Raghu)
> > >
> > > On Mon, Sep 26, 2016 at 1:07 PM, Andrei Borzenkov
> > >  wrote:  
>  [...]  
> > raghuh...@gmail.com>  
>  [...]  
>  [...]  
> > this  
>  [...]  
>  [...]  
>  [...]  
> > arvidj...@gmail.com>  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> > path  
>  [...]  
> > library.  
>  [...]  
>  [...]  
>  [...]  
> > the  
>  [...]  
>  [...]  
> > case  
>  [...]  
>  [...]  
>  [...]  
> > >
> > >  
> >  
> 



-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Is it possible to load unit files from paths other than default paths ?

2016-09-23 Thread Kai Krakow
Am Thu, 22 Sep 2016 11:43:56 +0530
schrieb "Raghavendra. H. R" <raghuh...@gmail.com>:

> Thank you for the suggestions.
> But with this suggestion I need to run as user something like that.
> 
> In normal init.d systems, we have environment variables like PATH &
> LD_LIBRARY_PATH.
> No matter where I place my new executable or library, adding that
> path into these environment variables is enough to execute or link
> the library.
> 
> Probably this kind of facility is not available in Systemd init
> systems.

Then this is like asking how sysvinit could load files from other
locations than /etc/init.d. I don't think that is possible. To solve
this, one copies the init scripts from where they are to init.d and
then just enables them. Tip: systemd works the same.

Adding a location where init scripts reside to PATH is just not exactly
what would work during boot. You are just exploiting a side effect
of /etc/init.d entries being scripts and not pure configuration.
Service files for systemd are configuration. Actually, this trick
won't solve startup dependencies either, as sysvinit systems only look
at that directory (or the runlevel directories where you'd symlink
your scripts - which is what you'd do in systemd, too: symlink the
services).

If this is how you do it, create a generator which creates a service
file for each script in a directory. Gentoo has something
for /etc/local.d that works this way. Another way would be (if the
copy source already consists of systemd service files) to just copy
those with the generator. But that was already suggested.

Here's the template to get you going:
https://gitweb.gentoo.org/proj/gentoo-systemd-integration.git/tree/system-generators/gentoo-local-generator

It expects /etc/local.d/*.{start,stop} bash scripts. I think it is easy
to adapt for you.
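
A minimal sketch of such a copying generator (the source directory is
taken from this thread, error handling omitted):

#!/bin/sh
# /etc/systemd/system-generators/custom-unit-generator
# systemd invokes generators with three output directories;
# the first ("normal" priority) is fine here.
GENDIR="$1"
for unit in /BingoDast/units/*.service; do
    [ -e "$unit" ] || continue
    cp "$unit" "$GENDIR/"
done

Note that generators run very early, so the source directory must
already be accessible before systemd starts - an NFS mount set up
later won't be.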

> On Thu, Sep 22, 2016 at 12:34 AM, Kai Krakow <hurikha...@gmail.com>
> wrote:
> 
> > Am Wed, 21 Sep 2016 16:56:52 +0530
> > schrieb "Raghavendra. H. R" <raghuh...@gmail.com>:
> >  
> > > Hi,
> > >
> > > I'm newbie with systemd boot system and I need help in resolving
> > > one issue.
> > >
> > > I would like to create a service under a customized path Eg:/mnt
> > > and systemd should be able to pick my unit file from this.
> > >
> > > I tried by setting *Environment=SYSTEMD_UNIT_PATH=/mnt *from the
> > > console but it didnt help and found the error *"Failed to start
> > > startup.service: Unit startup.service failed to load: No such
> > > file or directory."*
> > >
> > >
> > > Is it possible to achieve this ?  
> >
> > Not sure if this helps you, i.e. is appropriate for your use-case...
> >
> > But if the directory happens to be a home directory and the services
> > are designed to be run as user, you could make the service files go
> > into $HOME/.config/systemd/user/ (or symlink this to your
> > mountpoint) and enable linger on the user (loginctl enable-linger
> > $USER).
> >
> > You can then manage these units as the user through "system --user
> > {start,stop,enable,...}" (only with real login sessions, not through
> > sudo -iu $USER, but ssh would work).
> >
> > --
> > Regards,
> > Kai
> >
> > Replies to list-only preferred.
> >
> > ___
> > systemd-devel mailing list
> > systemd-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/systemd-devel
> >  
> 




-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Is it possible to load unit files from paths other than default paths ?

2016-09-21 Thread Kai Krakow
Am Wed, 21 Sep 2016 16:56:52 +0530
schrieb "Raghavendra. H. R" :

> Hi,
> 
> I'm newbie with systemd boot system and I need help in resolving one
> issue.
> 
> I would like to create a service under a customized path Eg:/mnt and
> systemd should be able to pick my unit file from this.
> 
> I tried by setting *Environment=SYSTEMD_UNIT_PATH=/mnt *from the
> console but it didnt help and found the error *"Failed to start
> startup.service: Unit startup.service failed to load: No such file or
> directory."*
> 
> 
> Is it possible to achieve this ?

Not sure if this helps you, i.e. is appropriate for your use-case...

But if the directory happens to be a home directory and the services
are designed to be run as user, you could make the service files go
into $HOME/.config/systemd/user/ (or symlink this to your mountpoint)
and enable linger on the user (loginctl enable-linger $USER).

You can then manage these units as the user through "systemctl --user
{start,stop,enable,...}" (only with real login sessions, not through
sudo -iu $USER, but ssh would work).
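
A sketch of the whole flow (paths and unit name are made up):

$ mkdir -p ~/.config/systemd/user
$ ln -s /mnt/units/myapp.service ~/.config/systemd/user/
$ loginctl enable-linger "$USER"
$ systemctl --user daemon-reload
$ systemctl --user enable --now myapp.service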

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [229] systemd timers not triggered after resume

2016-09-06 Thread Kai Krakow
Hello!

I found that systemd no longer triggers the timer after waking the
system. I'm not sure since which version this is true because I
couldn't use suspend for a longer time due to driver bugs in the kernel.

# /etc/systemd/system/internal-backup.timer
[Unit]
Description=Daily Backup Timer

[Timer]
OnCalendar=03:00
WakeSystem=true

[Install]
WantedBy=timers.target

The system will wake up and go back to sleep 2 hours later, as
configured. But the associated service won't start:

$ systemctl list-timers
NEXT                         LEFT      LAST  PASSED  UNIT                   ACTIVATES
...
Mi 2016-08-31 03:00:00 CEST  19h left  n/a   n/a     internal-backup.timer  internal-backup.service
...

9 timers listed.
Pass --all to see loaded but inactive timers, too.


It simply skips triggering and queues the service for the next
invocation.

Aug 30 03:00:35 jupiter.sol.local kernel: pcieport :00:1c.6: System wakeup enabled by ACPI
Aug 30 03:00:35 jupiter.sol.local kernel: ehci-pci :00:1d.0: System wakeup enabled by ACPI
Aug 30 03:00:35 jupiter.sol.local kernel: ehci-pci :00:1a.0: System wakeup enabled by ACPI
Aug 30 03:00:35 jupiter.sol.local kernel: PM: noirq suspend of devices complete after 11.418 msecs
Aug 30 03:00:35 jupiter.sol.local kernel: ACPI: Preparing to enter system sleep state S3
Aug 30 03:00:35 jupiter.sol.local kernel: PM: Saving platform NVS memory
Aug 30 03:00:35 jupiter.sol.local kernel: Disabling non-boot CPUs ...
Aug 30 03:00:35 jupiter.sol.local kernel: smpboot: CPU 1 is now offline
Aug 30 03:00:35 jupiter.sol.local kernel: smpboot: CPU 2 is now offline
Aug 30 03:00:35 jupiter.sol.local kernel: smpboot: CPU 3 is now offline
Aug 30 03:00:35 jupiter.sol.local kernel: ACPI: Low-level resume complete
Aug 30 03:00:35 jupiter.sol.local kernel: PM: Restoring platform NVS memory
Aug 30 03:00:35 jupiter.sol.local kernel: Enabling non-boot CPUs ...
Aug 30 03:00:35 jupiter.sol.local kernel: x86: Booting SMP configuration:
Aug 30 03:00:35 jupiter.sol.local kernel: smpboot: Booting Node 0 Processor 1 APIC 0x2
Aug 30 03:00:35 jupiter.sol.local kernel:  cache: parent cpu1 should not be sleeping
Aug 30 03:00:35 jupiter.sol.local kernel: CPU1 is up
Aug 30 03:00:35 jupiter.sol.local kernel: smpboot: Booting Node 0 Processor 2 APIC 0x4
Aug 30 03:00:35 jupiter.sol.local kernel:  cache: parent cpu2 should not be sleeping
Aug 30 03:00:35 jupiter.sol.local kernel: CPU2 is up
Aug 30 03:00:35 jupiter.sol.local kernel: smpboot: Booting Node 0 Processor 3 APIC 0x6
Aug 30 03:00:35 jupiter.sol.local kernel:  cache: parent cpu3 should not be sleeping
Aug 30 03:00:35 jupiter.sol.local kernel: CPU3 is up
Aug 30 03:00:35 jupiter.sol.local kernel: ACPI: Waking up from system sleep state S3
Aug 30 03:00:35 jupiter.sol.local kernel: ehci-pci :00:1d.0: System wakeup disabled by ACPI
Aug 30 03:00:35 jupiter.sol.local kernel: pcieport :00:1c.6: System wakeup disabled by ACPI
Aug 30 03:00:35 jupiter.sol.local kernel: ehci-pci :00:1a.0: System wakeup disabled by ACPI
Aug 30 03:00:35 jupiter.sol.local kernel: PM: noirq resume of devices complete after 11.299 msecs
Aug 30 03:00:35 jupiter.sol.local kernel: PM: early resume of devices complete after 0.266 msecs
Aug 30 03:00:35 jupiter.sol.local kernel: rtc_cmos 00:02: System wakeup disabled by ACPI
Aug 30 03:00:35 jupiter.sol.local kernel: usb usb3: root hub lost power or was reset
Aug 30 03:00:35 jupiter.sol.local kernel: usb usb4: root hub lost power or was reset
Aug 30 03:00:35 jupiter.sol.local kernel: sd 3:0:0:0: [sdd] Starting disk
Aug 30 03:00:35 jupiter.sol.local kernel: sd 0:0:0:0: [sda] Starting disk
Aug 30 03:00:35 jupiter.sol.local kernel: sd 4:0:0:0: [sde] Starting disk
Aug 30 03:00:35 jupiter.sol.local kernel: sd 1:0:0:0: [sdb] Starting disk
Aug 30 03:00:35 jupiter.sol.local kernel: sd 2:0:0:0: [sdc] Starting disk
Aug 30 03:00:35 jupiter.sol.local kernel: usb 4-1: reset SuperSpeed USB device number 2 using xhci_hcd
Aug 30 03:00:35 jupiter.sol.local kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 30 03:00:35 jupiter.sol.local kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (unknown) filtered out
Aug 30 03:00:35 jupiter.sol.local kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (unknown) filtered out
Aug 30 03:00:35 jupiter.sol.local kernel: ata1.00: supports DRM functions and may not be fully accessible
Aug 30 03:00:35 jupiter.sol.local kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (unknown) filtered out
Aug 30 03:00:35 jupiter.sol.local kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (unknown) filtered out
Aug 30 03:00:35 jupiter.sol.local kernel: ata1.00: supports DRM functions and may not be fully accessible
Aug 30 03:00:35 jupiter.sol.local kernel: ata1.00: configured for UDMA/133
Aug 30 03:00:35 jupiter.sol.local kernel: PM: noirq resume of devices complete after 11.299 msecs
Aug 30 03:00:35 jupiter.sol.local 


Re: [systemd-devel] Emulate two cron tab entries to start/stop service unit natively?

2016-08-02 Thread Kai Krakow
Am Mon, 1 Aug 2016 23:59:13 + (UTC)
schrieb John :

> Is it possible to use a systemd timer unit to start and stop a
> service unit according to set times of the day?  In my case,
> openvpn.service is a forking type if that matters. I can do this
> using cron, but am wondering if/how to do it with systemd natively.
> 
> In cron terms, one could do this like so:
> # start at 7 AM
> * 7 * * * systemctl start openvpn.service
> 
> 
> # stop at 5 PM
> * 17 * * * systemctl stop openvnp.service
> 
> The syntax of the timer with differential commands (ie start the
> service at 7 AM and stop it at 5 PM) isn't clear to me even after
> consulting `man systemd.time` and `man systemd.timer`.

Create two additional services, openvpn-start.service and
openvpn-stop.service, which respectively start or stop openvpn.service
(Wants= and Conflicts= should work). Those two services should be
Type=oneshot, so they run once and exit without error; they need no
real payload of their own.

Now create two timer units, openvpn-{start,stop}.timer with appropriate
time definitions and enable those. All other units shouldn't be enabled.
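
A minimal sketch of what that could look like (unit names, times, and
the /bin/true no-op are illustrative, untested):

# openvpn-start.timer
[Timer]
OnCalendar=07:00

[Install]
WantedBy=timers.target

# openvpn-start.service
[Unit]
Wants=openvpn.service
After=openvpn.service

[Service]
Type=oneshot
# a service needs at least one ExecStart= (or RemainAfterExit=yes),
# so use a no-op; the [Unit] dependencies do the actual work
ExecStart=/bin/true

# openvpn-stop.timer
[Timer]
OnCalendar=17:00

[Install]
WantedBy=timers.target

# openvpn-stop.service
[Unit]
# starting this unit stops openvpn.service via the negative dependency
Conflicts=openvpn.service

[Service]
Type=oneshot
ExecStart=/bin/true

Then "systemctl enable openvpn-start.timer openvpn-stop.timer" and
leave the service units themselves disabled.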

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Question about changing systemd target during boot

2016-08-01 Thread Kai Krakow
Am Mon, 1 Aug 2016 16:09:36 +0300
schrieb Svetoslav Iliev :

> Hi guys,
> 
> Thank you for the prompt reply and your valuable input. Just to let
> you know - I was able to do exactly what I intended. As it turns out
> my mistake was indeed creating contradiction between the WantedBy and
> After sections. Once I introduced a new "change.target" and adjusted
> my services accordingly I was able to isolate successfully either A
> or B targets during boot.
> 
> I also had to split the services in two: one main blocking of type 
> oneshot and one non-blocking of simple type just to switch the
> target. As it seems I cannot call systemctl isolate from onehost type
> of service.

Wouldn't it be easier to simply make your change.target the default
boot target, have it depend on network-online.target, and use a service
to start the target you need instead of isolating? Otherwise
multi-user.target starts services you are going to stop again just a
blink later via isolate.
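
A rough sketch of that idea (contents are illustrative):

# /etc/systemd/system/change.target
[Unit]
Description=Decide between target A and B at boot
Wants=network-online.target
After=network-online.target

Make it the default with "systemctl set-default change.target"; a
service with WantedBy=change.target can then start a.target or b.target
directly instead of isolating.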

> I just like to say that I followed this guide: 
> https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget where
> I quote "/Alternatively, you can change your service that needs the 
> network to be up, to include After=network-online.target and 
> Wants=network-online.target./"
> 
> Once again thanks all for the help.
> 
> ---
> BR,
> 
> Swetli
> 
> On 08/01/2016 03:38 PM, Andrei Borzenkov wrote:
> > On Mon, Aug 1, 2016 at 2:43 PM, Michael Chapman
> >  wrote:  
> >> On Mon, 1 Aug 2016, Andrei Borzenkov wrote:  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> >>
> >> I just checked the code, and it looks like systemd explicitly
> >> *skips* these default dependencies if they would create a loop. In
> >> target_add_default_dependencies:
> >>  
> > Yes, of course. It is also described in manual. But the question is
> > what user actually intended? It is more topic of good design.
> > ___
> > systemd-devel mailing list
> > systemd-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/systemd-devel  
> 
> 



-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Zero downtime restart of Web application (Nginx, Puma, JRuby)

2016-07-15 Thread Kai Krakow
Am Sat, 18 Jun 2016 13:56:03 +0200
schrieb Paul Menzel :

> Dear systemd folks,
> 
> 
> the setup is as follows.
> 
> Nginx is used as the Web server, which communicates with a Ruby on
> Rails application over a socket. Puma [1] is used as the application
> server.
> 
> Nginx and Puma are managed by systemd service files.
> 
> If a new version of the application is installed, the goal is that
> there is no downtime. That means, until the application with the new
> code isn’t ready yet to serve requests, the old application still
> answers them, that means, not only are no requests lost, but during
> restart the visitor does not need to wait any longer than normally.
> 
> In this case, JRuby is used, which means that application start, `sudo
> systemctl start web-application.service` takes more than 30 seconds.
> 
> So, `sudo systemctl restart web-application.service` is not enough as
> Puma takes some time to be started. (The socket activation described
> in the Puma systemd documentation [2] only takes care, that no
> requests are lost. The visitor still has to wait.)
> 
> Is that possible by just using systemd, or is a load balancer like
> HAProxy or a special NGINX configuration and service file templates
> needed?
> 
> My current thinking would be, that a service file template for the
> Puma application server is used. The new code is then started in
> parallel, and if it’s done, it “takes over the socket”. (No idea if
> NGINX would need to be restarted.) Then the old version of the code
> is stopped. (I don’t know how to model that last step/dependency.)
> 
> What drawbacks does the above method have? Is it implementable?
> 
> How do you manage such a setup?

This is not really systemd-specific. Systemd solves zero downtime only
via socket activation, which is not exactly what you expect.

We are using mod_passenger with nginx to provide zero downtime during
application deployment; background workers (job servers etc.) are
managed by systemd and socket activation.

Passenger however does not support JRuby afaik. But you may want to
look at how they implement zero downtime: it's kind of a proxy which is
switched to the new application instance as soon as it is up and
running. You could do something similar: deploy to a new installation;
when it is ready, rewrite your nginx config to point to the new
instance, reload nginx, then gracefully stop the old instance. It's not
as well integrated as Passenger's approach, but it can be optimized.
However, nothing of this is systemd-specific, except you may still want
to use socket activation in some places. Stopping and starting
instances and reloading nginx should be part of your deployment
process. If controlled with systemd, you can use service templates like
my-application@.service and then start
my-application@instance-201607150001.service and stop
my-application@instance-201607140003.service. You can use the instance
name to set up sockets, proxies etc.
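
A rough sketch of that flow (paths, names and the reload mechanics are
examples, not a tested recipe):

# my-application@.service (template; %i is the instance/release id)
[Unit]
Description=Application instance %i

[Service]
Type=simple
Environment=RAILS_ENV=production
WorkingDirectory=/srv/my-application/releases/%i
ExecStart=/usr/bin/env bin/puma -C config/puma.rb
Restart=on-failure

Deployment then becomes roughly:

systemctl start my-application@instance-201607150001.service
# wait until the new instance answers, point the nginx upstream at it
systemctl reload nginx.service
systemctl stop my-application@instance-201607140003.service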

-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Adding a Timer section to .service files

2016-07-09 Thread Kai Krakow
Am Fri, 8 Jul 2016 19:34:48 +0300
schrieb One Infinite Loop <6po...@gmail.com>:

> As I said before, I don't want to replace .service+.timer
> combination. I just think there are cases when .service file
> (containing, for example, ExecStart followed by many ExecStartPost)
> can have a [Crontab] section with .timer syntax. The two formats
> (service+timer and [Crontab] inside a service file) can coexist. It's
> just a suggestion.

It won't work that way, because you need to enable timers. If the timer
is enabled, it triggers the service of the same name (by default; this
can be configured). That service itself is not enabled - it just sits
there and is started by the timer.

If you enabled said service, it would run at boot. Combining timer and
service into one file would thus just run the service at boot - I don't
think that is your intention.

If anything makes sense at all, it would probably be to allow
ExecStart* within timer units - but that would only make sense for
Type=oneshot items; otherwise such a "timer" would fire once and stay
active.

Timer and service are split for good reason. It makes things much
simpler from systemd's view and much more flexible from the admin's
view.
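
For illustration, the split then looks like this ("foo" is a
placeholder):

# foo.timer - this is what gets enabled
[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target

# foo.service - not enabled; started by the timer of the same name
[Service]
Type=oneshot
ExecStart=/usr/bin/foo

$ systemctl enable foo.timer   # not foo.service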

PS: Please do not top-post.

> On Fri, Jul 8, 2016 at 7:16 PM, Lennart Poettering
>  wrote:
> 
> > On Fri, 08.07.16 16:35, One Infinite Loop (6po...@gmail.com) wrote:
> >  
> > > A few usecases:
> > > 1) I want to delete specific files once a day  
> >
> > For this you probably should be using tmpfiles' "aging" logic, and
> > not define your own timer.
> >  
> > > 2)I want to free RAM using sync command and `echo 3 >
> > > /proc/sys/vm/drop_caches` every 15 seconds  
> >
> > Uh. Oh. I don't see why anyone would want to do this...
> >  
> > > 3)I want to make sure certain processes always run using a
> > > specific nice value like -15. I know control groups are invented
> > > but it's not the same thing.  
> >
> > Doing this with a service timer appears very strange to me. Simply
> > set "Nice=-15" in the unit file starting your service and the nice
> > level will be properly inherited by all processes of your services.
> >
> > But, in general, you could do all of the above with a combination of
> > .timer and .service file just fine already. These usecases are
> > perfectly covered, the only difference between what you are
> > proposing and what has been implemented is whether it's adding two
> > unit files per item instead of one, which I don't think is too
> > bad...
> >
> > Lennart


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] How to set a limit for mounting roofs?

2016-07-06 Thread Kai Krakow
Am Wed, 6 Jul 2016 06:21:17 +0200
schrieb Lennart Poettering :

> On Mon, 04.07.16 12:32, Chris Murphy (li...@colorremedies.com) wrote:
> 
> > I have a system where I get an indefinite
> > 
> > "A start job is running for dev-vda2.device (xmin ys / no limit)"
> > 
> > Is there a boot parameter to use to change the no limit to have a
> > limit? rd.timeout does nothing. When I use rd.break=pre-mount and
> > use blkid, /dev/vda2 is there and can be mounted with the same mount
> > options as specified with rootflags= on the command line. So I'm not
> > sure why it's hanging indefinitely. I figure with
> > systemd.log_level=debug and rd.debug and maybe rd.udev.debug I
> > should be able to figure out why this is failing to mount.  
> 
> It should be possible to use x-systemd.device-timeout= among the mount
> options in rootflags= to alter the timeout for the device.
> 
> But what precisely do you expect to happen in this case? You'd simply
> choose between "waits forever" and "fails after some time"... A
> missing root device is hardly something you can just ignore and
> proceed...

I think a degraded btrfs is not actually a missing rootfs. Systemd
tries to decide between black and white here - but btrfs also knows
gray. And I don't mean that systemd should incorporate something to
resolve or announce this issue - that's a UI problem: If the device pool
is degraded, it's the UI that should tell the user. I think, some time
in the future btrfs may automatically fall back to degraded mounts just
like software and hardware raid do. Systemd also doesn't decide not to
boot in that case (raid) and wait forever for a device that's not going
to appear. The problem currently is just that btrfs doesn't go degraded
automatically (for reasons that have been outlined in the btrfs list) -
systemd apparently should have a way to work around this. The degraded
case for btrfs is already covered by the fact that you need to supply
degraded to rootflags on the kernel cmdline - otherwise mounting will
fail anyways, no matter if systemd had a workaround or not. So the UI
part is already covered more or less.

I don't think that incorporating rootflags into the "btrfs ready"
decision is going to work. And as I understand it, using device-timeout
will just result in a missing rootfs after the timeout, and the
degraded fs won't be marked as ready by it. So btrfs maybe needs
special timeout handling for "btrfs ready", as I wrote in the other
post of this thread.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] How to set a limit for mounting roofs?

2016-07-06 Thread Kai Krakow
Am Wed, 6 Jul 2016 06:26:03 +0200
schrieb Lennart Poettering :

> On Tue, 05.07.16 14:00, Chris Murphy (li...@colorremedies.com) wrote:
> 
> > On Tue, Jul 5, 2016 at 12:45 PM, Chris Murphy
> >  wrote:  
> > > OK it must be this.
> > >
> > > :/# cat /usr/lib/udev/rules.d/64-btrfs.rules
> > > # do not edit this file, it will be overwritten on update
> > >
> > > SUBSYSTEM!="block", GOTO="btrfs_end"
> > > ACTION=="remove", GOTO="btrfs_end"
> > > ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
> > >
> > > # let the kernel know about this btrfs filesystem, and check if
> > > it is complete IMPORT{builtin}="btrfs ready $devnode"
> > >
> > > # mark the device as not ready to be used by the system
> > > ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"
> > >
> > > LABEL="btrfs_end"  
> > 
> > Yep.
> > https://lists.freedesktop.org/archives/systemd-commits/2012-September/002503.html
> > 
> > The problem is that with rootflags=degraded it still indefinitely
> > hangs. And even without the degraded option, I don't think the
> > indefinite hang waiting for missing devices is the best way to find
> > out there's been device failures. I think it's better to fail to
> > mount, and end up at a dracut shell.  
> 
> I figure it would be OK to merge a patch that makes the udev rules
> above set SYSTEMD_READY immediately if the device popped up in case
> some new kernel command line option is set.
> 
> Hooking up rootflags=degraded with this is unlikely to work I fear, as
> by the time the udev rules run we have no idea yet what systemd wants
> to make from the device in the end. That means knowing this early the
> fact that system wants to mount it as root disk with some specific
> mount options is not really sensible in the design...

A possible solution could be to fall back to simply announce
SYSTEMD_READY=1 after a sensible timeout instead of waiting forever for
a situation that is unlikely to happen. That way, the system would boot
slowly in the degraded case but it would continue to boot. If something
goes wrong now, it would probably fall back to rescue shell, right?

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Failed to restart ntpd

2016-05-21 Thread Kai Krakow
Am Thu, 12 May 2016 11:51:13 +0200
schrieb Reindl Harald :

> Am 12.05.2016 um 11:46 schrieb liuxueping:
> > Before i restart ntpd,ntpd process was running:
> > ntp   3993  0.0  0.0   7404  4156 ?Ss   10:21   0:00
> > /usr/sbin/ntpd -u ntp:ntp -g
> > root  3995  0.0  0.0   7404  2364 ?S10:21   0:00
> > /usr/sbin/ntpd -u ntp:ntp -g
> > so,it should be killed by systemctl and restart a new ntpd
> > process,but it failed,i want to know systemd how to judge that a
> > process is killed completed to start a new service.  
> 
> again: systemd monitors all processes part of a service
> systemctl itself does nothing, it just invokes commands
> 
> when you manually started a ntpd process systemd don't know it should
> be killed and *it should not* get killed just because "systemctl
> restart ntpd"
> 
> so when there is a ntpd process which is not listed in "systemctl 
> status" you or something has manually fired up that process - don't
> do that at all - and you need to kill it the same way

This situation may also happen if shell scripts provide services and
try to do funny things like "su - $user" to switch to another user.
This creates a new session, and PIDs started there seem not to be
tracked by systemd as part of the service, so reliable teardown by
systemd is broken.

I had this problem with dccifd, whose scripts try to be extraordinarily
smart and reinvent the wheel.
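
For comparison, a sketch of letting systemd do the user switch itself
(user, group and path are illustrative):

# dccifd.service fragment
[Service]
User=dcc
Group=dcc
ExecStart=/usr/sbin/dccifd

With User=/Group= set, no new session is created and all processes stay
tracked as part of the service, so teardown works reliably.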


-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Is there a way to see if a script is run from systemd

2016-05-08 Thread Kai Krakow
Am Sun, 8 May 2016 13:05:34 +0200
schrieb Reindl Harald :

> Am 07.05.2016 um 15:00 schrieb Cecil Westerhof:
> > I have written a Bash script to be used for a service. Is it
> > possible to see in the script if it is run from systemd? I could
> > use this for debugging purposes  
> 
> just set a environment variable in the systemd unit or check against
> a lot of env-vars missing which are there in a ordinary shell but
> removed from systemd to start with a clean anvironement

It would probably also work to look at $PPID, which should be 1 if
running under systemd - I have not tried that, though.

But keep in mind that this is not necessarily systemd-specific. And if
the script daemonizes, the parent may switch to 1 anyway.
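
A quick sketch of both approaches in a shell script (untested):

#!/bin/bash
# idea 1: the parent is PID 1 when spawned directly by systemd
if [ "$PPID" -eq 1 ]; then
    echo "parent is PID 1 (probably systemd, but could be another init)"
fi

# idea 2: set Environment=STARTED_BY_SYSTEMD=1 in the unit file
# and test for it here
if [ -n "$STARTED_BY_SYSTEMD" ]; then
    echo "started through the unit that sets this variable"
fi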

-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] From command-line 'systemctl suspend' works' from cron it does not

2016-03-22 Thread Kai Krakow
Am Mon, 21 Mar 2016 09:21:39 +0100
schrieb Cecil Westerhof :

> When executing
> systemctl suspend || echo "Error code: ${?}"
> from the command-line it outputs
> Error code: 1
> and it puts my machine in suspend.
> 
> When putting it in cron it gives the following errors:
> Failed to execute operation: Access denied
> Failed to start suspend.target: Access denied
> and gives the output:
> Error code: 4
> 
> What is happening here? Is it possible to run 'systemctl suspend' from
> cron, or is there a reason why this is not possible?

It's probably because cron doesn't set up a systemd session. Do you
perhaps run it from a user crontab? Have you tried running it from the
root crontab?
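
If a root crontab is an option, a sketch (the time is illustrative):

# root crontab (crontab -e as root): suspend every day at 23:30
30 23 * * * /usr/bin/systemctl suspend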

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] How to depend a user session service on a system service?

2016-03-09 Thread Kai Krakow
Hello!

I have some user session services running (executing Rails background
workers, e.g. sidekiq etc.). On one busy/bigger server, when booting
the machine, these user sessions fail to start up due to timeouts.

Actually, it feels wrong to just bump up the timeouts. I'd rather make
them depend on the mysqld service (which is the slowest to start) and
redis (it makes no sense without it) but that actually fails on me:
Trying to start the user service now says that the service is unknown:


$ systemctl --user cat sidekiq@invitation_tool.service
# /etc/systemd/user/sidekiq@.service
[Unit]
Description=sidekiq for %i
After=redis.service
Requires=redis.service

[Service]
Type=simple
Environment=RAILS_ENV=production
WorkingDirectory=%h/rails-apps/%i/current
ExecStart=/usr/bin/env bin/sidekiq -C config/sidekiq.yml
ExecStop=/bin/kill -TERM $MAINPID
TimeoutStartSec=300s
Restart=on-failure

[Install]
WantedBy=default.target


So it looks like user session deps are completely isolated from the
system dependencies namespace. How do I circumvent this? I'd like to
depend user session services on system services.

Optimally this could be solved by proper socket activation but
apparently most service are still far from supporting it.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd output

2016-03-02 Thread Kai Krakow
Am Wed, 2 Mar 2016 11:41:32 -0800
schrieb William Taylor :

> If you are starting/stopping  a service manually, is it possible to
> see its output as it's running?
> 
> For example if I have a process that takes some time to start and
> I want to periodically output something to the current users terminal.
> 
> ___
> systemd-devel mailing list
> systemd-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/systemd-devel

journalctl -fu $YOURSERVICE & systemctl restart $YOURSERVICE

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] nspawn networkd could not find udev device

2016-03-02 Thread Kai Krakow
Am Wed, 2 Mar 2016 16:53:01 +0800 (CST)
schrieb kennedy :

> Hi
> 
> 
> On my first run systemd-nspawn everything that's OK.
> When I shutdown the nspawn container, and re-run systemd-nspawn
> again, this error comes. In container, it's only have 169.254.x.x
> network, didn't have network bridge's network.
> 
> On my host, I run systemctl status systemd-networkd it show me
> 
> 
> 
> systemd-networkd[23507]: br-containers   : netdev ready
> systemd-networkd[23507]: eth0: gained carrier
> systemd-networkd[23507]: lo  : gained carrier
> systemd-networkd[23507]: br-containers   : link configured
> systemd-networkd[23507]: br-containers   : gained carrier
> systemd-networkd[23507]: vb-t1   : could not find udev device
> systemd-networkd[23507]: br-containers   : lost carrier
> systemd-networkd[23507]: br-containers   : gained carrier
> 
> 
> My container's log:
> systemd[1]: Starting Network Service...
> systemd-networkd[41]: Enumeration completed
> systemd[1]: Started Network Service.
> systemd-networkd[41]: host0: Could not bring up interface: Invalid
> argument systemd-networkd[41]: host0: Gained carrier
> systemd-networkd[41]: host0: Gained IPv6LL
> 
> 
> And I restart the host's systemd-networkd service, and re-run
> container again, that's solved, why ?

How do you run and shut down the container? Same setup [1] here works
without problems. Which systemd version?

Could you run "bridge link" before and after each step?

You can also run "bridge monitor" while performing the steps. When
shutting down a machine, it should show two MACs being deleted from the
bridge (both sides of the veth), with each interface deconfigured and
shut down just before deletion, and the reverse when bringing the
machine up. Like this (running "machinectl terminate gentoo-mysql-base"
and "machinectl start gentoo-mysql-base" in another terminal):

# bridge monitor
6: vb-gentoo-mysq state DOWN @NONE:  mtu 1500 master 
br-containers
6: vb-gentoo-mysq state DOWN @NONE:  mtu 1500 master 
br-containers state disabled priority 32 cost 2
Deleted 62:2c:b9:14:ed:91 dev vb-gentoo-mysq master br-containers
6: vb-gentoo-mysq state DOWN @NONE:  mtu 1500 master 
br-containers state disabled priority 32 cost 2
6: vb-gentoo-mysq state DOWN @NONE:  mtu 1500 master 
br-containers state disabled priority 32 cost 2
Deleted 6: vb-gentoo-mysq state DOWN @NONE:  mtu 1500 
master br-containers
Deleted 3a:06:21:31:88:fa dev vb-gentoo-mysq master br-containers permanent
Deleted 3a:06:21:31:88:fa dev vb-gentoo-mysq vlan 1 master br-containers 
permanent
Deleted 6: vb-gentoo-mysq state DOWN @NONE:  mtu 1500
8: vb-gentoo-mysq state DOWN @enp5s0:  mtu 1500
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500 
master br-containers
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500 
master br-containers
3a:06:21:31:88:fa dev vb-gentoo-mysq master br-containers permanent
3a:06:21:31:88:fa dev vb-gentoo-mysq vlan 1 master br-containers permanent
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500 
master br-containers state forwarding priority 32 cost 2
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500 
master br-containers state forwarding priority 32 cost 2
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500 
master br-containers state forwarding priority 32 cost 2
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500 
master br-containers state forwarding priority 32 cost 2
8: vb-gentoo-mysq state UNKNOWN @enp5s0:  mtu 1500 
master br-containers
8: vb-gentoo-mysq state LOWERLAYERDOWN @enp5s0: 
 mtu 1500 master br-containers state 
disabled priority 32 cost 2
8: vb-gentoo-mysq state LOWERLAYERDOWN @enp5s0: 
 mtu 1500 master br-containers state 
disabled priority 32 cost 2
8: vb-gentoo-mysq state LOWERLAYERDOWN @enp5s0: 
 mtu 1500 master br-containers
8: vb-gentoo-mysq state UP @enp5s0:  mtu 1500 
master br-containers state forwarding priority 32 cost 2
8: vb-gentoo-mysq state UP @enp5s0:  mtu 1500 
master br-containers state forwarding priority 32 cost 2
8: vb-gentoo-mysq state UP @enp5s0:  mtu 1500 
master br-containers state forwarding priority 32 cost 2
8: vb-gentoo-mysq state UP @enp5s0:  mtu 1500 
master br-containers
62:2c:b9:14:ed:91 dev vb-gentoo-mysq master br-containers
dev br-containers port vb-gentoo-mysq grp 

Re: [systemd-devel] howto systemd restart one network device

2016-03-01 Thread Kai Krakow
Am Mon, 29 Feb 2016 22:59:17 -0800
schrieb J Decker :

> well that was the search
> 
> after everything is up how do I restart a single device so it will
> work in the meantime...

I usually do "systemctl restart systemd-networkd" which probably
restarts all devices. This worked for me all the time.

Maybe something like:

$ systemctl restart sys-subsystem-net-devices-enp5s0.device
  ^^
put your device here

Found by "systemctl | fgrep enp"

However, "systemctl show ..." indicates it doesn't support doing that.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] tunnel configuration broken

2016-02-29 Thread Kai Krakow
Am Mon, 29 Feb 2016 19:12:11 -0800
schrieb J Decker :

> I would have thought that naming 00-eth0.network; 01-eth1.network or
> something would start devices in that order?

No... it says nothing about start order in your sense. The numbering
just determines which configuration overwrites another when options are
specified in multiple files.

So it's not start order, it's just the order of precedence for
configuration options.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn network ping

2016-02-29 Thread Kai Krakow
Am Mon, 29 Feb 2016 16:38:59 +0800 (CST)
schrieb kennedy  <kennedy...@163.com>:

> I think it will be routed, I tied ping 169.254.x.x container to
> container, and it's successed. And, I created the systemd-nspawn
> container after 5 minus ago, the container's host0 network auto turns
> ip on 169.254.x.x rather than 10.0.0.x .

No, it's not routed. You can ping from container to container because
they are connected to the same network segment (via the bridge, which
is a virtual switch - well, layer-2 forwarding, of course). A ping
won't leave the bridge: try to ping from within the container to
something else in your LAN (remove the masquerading first, disable
DHCP, or use the source parameter on ping). It shouldn't work.

If you run "sudo ip addr" it will show link scope on these addresses,
not global.

Your expectation was "host scope" - packets won't leave the host. A
host scope network is 127.0.0.0/8: try running a web service on
127.0.0.2:80, then go to the other container and curl that address: it
won't work because 127/8 has host scope.

Host scope: local loopback
Link scope: local network segment
Global scope: can pass segment boundaries

IPv6 has an additional site scope: Packets won't pass outbound network
segments (read: don't leave via the public external interface of a
boundary/edge router).

https://en.wikipedia.org/wiki/Link-local_address

> At 2016-02-29 15:40:10, "Kai Krakow" <hurikha...@gmail.com> wrote:
> 
> Hello!
> 
> 
> This is link local IP. You can see it like 127.0.0.0/8 but for the
> whole network segment: it won't be route thus it never leaves the
> bridge. I think you can safely ignore it tho I also think there is a
> way to deactivate assigning such addresses in networkd. AFAIK it's
> called APIPA.
> 
> 
> Regards,
> Kai
> 
> 
> kennedy <kennedy...@163.com> schrieb am Mo., 29. Feb. 2016 um 07:43
> Uhr:
> 
> Thanks! it works!
> but I had a little question.
> In my container, I run `route -n` it show me 3 result ,they are:
> 
> 
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric Ref
> Use Iface 0.0.0.0 0.0.0.0 0.0.0.0 U
> 2048   00 host0 10.0.0.00.0.0.0
> 255.0.0.0   U 0  00 host0 169.254.0.0
> 0.0.0.0 255.255.0.0 U 0  00 host0
> 
> 
> What's the "169.254.0.0" comes from ? I'v never configured that.
> 
> 
> and when I add the "--network-veth" option on systemd-nspawn, the
> "169.254.0.0" it's disappeared.
> 
> 
> --
> Yours Sincerely
> Han
> 
> 
> 
> At 2016-02-29 00:26:54, "Kai Krakow" <hurikha...@gmail.com> wrote:
> >Am Sun, 28 Feb 2016 23:41:22 +0800 (CST)
> >schrieb kennedy <kennedy...@163.com>:
> >
> >> how to ping container to container each other in systemd-nspawn ?
> >> I've tried --network-veth option but it doesn't work enough.
> >
> >You need to join all host-side veth interfaces into the same bridge.
> >Make two files for systemd-networkd:
> >
> ># 99-bridge-cn.netdev
> >[NetDev]
> >Name=br-containers
> >Kind=bridge
> >[Match]
> >Name=br-containers
> >
> ># 99-bridge-cn.network
> >[Network]
> >Address=10.0.0.1/24
> >DHCPServer=yes
> >IPForward=yes
> >IPMasquerade=yes
> >
> >Then "systemctl --edit systemd-nspawn@.service" to contain the
> >following:
> >
> >
> >[Service]
> >ExecStart=
> >ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot \
> >--link-journal=try-guest --private-network \
> >--network-bridge=br-containers --machine=%I
> >
> >
> >This will add all your container veth devices to the same bridge
> >which you configured in systemd-networkd. You should now be able to
> >ping each other.
> >
> >You may need to adjust a few more settings for your needs. I'd
> >recommend to add nss-mymachines (see man page).
> >
> >
> >-- 
> >Regards,
> >Kai
> >
> >Replies to list-only preferred.
> >
> >___
> >systemd-devel mailing list
> >systemd-devel@lists.freedesktop.org
> >https://lists.freedesktop.org/mailman/listinfo/systemd-devel


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn network ping

2016-02-28 Thread Kai Krakow
Am Sun, 28 Feb 2016 23:41:22 +0800 (CST)
schrieb kennedy :

> how to ping container to container each other in systemd-nspawn ?
> I've tried --network-veth option but it doesn't work enough.

You need to join all host-side veth interfaces into the same bridge.
Make two files for systemd-networkd:

# 99-bridge-cn.netdev
[NetDev]
Name=br-containers
Kind=bridge
[Match]
Name=br-containers

# 99-bridge-cn.network
[Network]
Address=10.0.0.1/24
DHCPServer=yes
IPForward=yes
IPMasquerade=yes

Then "systemctl --edit systemd-nspawn@.service" to contain the
following:


[Service]
ExecStart=
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot \
--link-journal=try-guest --private-network \
--network-bridge=br-containers --machine=%I


This will add all your container veth devices to the same bridge which
you configured in systemd-networkd. You should now be able to ping each
other.

You may need to adjust a few more settings for your needs. I'd
recommend to add nss-mymachines (see man page).
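
For example, the relevant line in /etc/nsswitch.conf (see
nss-mymachines(8); putting mymachines before resolve avoids resolver
timeouts):

hosts: files mymachines resolve myhostname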


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd user sessions?

2016-02-24 Thread Kai Krakow
Am Wed, 24 Feb 2016 17:41:57 +0100
schrieb Krzysztof Kotlenga :

> Jon Stanley wrote:
> 
> > I'd like a systemd unit (and only that unit) to be controlled by a
> > specific user. The unit runs as this user, so I thought about user
> > instances of systemd. This service should be started when the system
> > starts, so you'd have to enable linger in systemd-logind for that to
> > work.
> >
> > The question is how to make the systemd user *service* start at
> > boot?
> 
> [Install]
> WantedBy=default.target
> 
> $ systemctl --user enable foo.service
> Created symlink
> from /home/user/.config/systemd/user/default.target.wants/foo.service
> to /home/user/.config/systemd/user/foo.service.
> $
> 
> That's pretty much it.

I don't think this is how it works... User services do not start at
system boot but at user session start. So without starting the user
session on boot, the user service won't start.

I think it needs to be a system service, then use polkit rules to
enable a user to control this system service.
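
A sketch of such a polkit rule (unit and user names are illustrative;
whether the "unit" detail is available on manage-units may depend on
your systemd/polkit versions):

// /etc/polkit-1/rules.d/10-user-service.rules
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "foo.service" &&
        subject.user == "someuser") {
        return polkit.Result.YES;
    }
});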

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] nss-mymachines: slow name resolution

2016-02-16 Thread Kai Krakow
Am Tue, 16 Feb 2016 19:39:26 +0100
schrieb Kai Krakow <hurikha...@gmail.com>:

> Am Tue, 16 Feb 2016 15:35:24 +0100
> schrieb Lennart Poettering <lenn...@poettering.net>:
> 
> > On Mon, 15.02.16 21:32, Kai Krakow (hurikha...@gmail.com) wrote:
> > 
> > > Am Mon, 15 Feb 2016 14:28:19 +0100
> > > schrieb Lennart Poettering <lenn...@poettering.net>:
> > > 
> > > > On Sun, 14.02.16 13:49, Kai Krakow (hurikha...@gmail.com) wrote:
> > > > 
> > > > > Hello!
> > > > > 
> > > > > I've followed the man page guide to setup mymachines name
> > > > > resolution in nsswitch.conf. It works. But it takes around 4-5
> > > > > seconds to resolve a name. This is unexpected and cannot be
> > > > > used in production.
> > > > 
> > > > This sounds like the LLMNR timeout done. I figure we should fix
> > > > the docs to suggest that "mymachines" appears before "resolve"
> > > > in nsswitch.conf. That should fix your issue...
> > > 
> > > Apparently it doesn't fix it - although I will leave it in this
> > > order according to your recommendation.
> > > 
> > > Is there a way to globally disable LLMNR altogether to nail it
> > > down? I tried setting LLMNR=false in *.network - didn't help.
> > 
> > Use the LLMNR= setting in /etc/systemd/resolved.conf
> 
> Yeah! *thumbsup* You da man, Lennart!
> 
> Setting LLMNR to "resolve" or to "no" globally solves the problem
> which proves your first suspicion.

BTW: Enabling and starting avahi also fixed the problem (at least it
looks like it; I did a few other steps as well), although I don't see
it listening on port 5353.

> Now, how can I figure out which interface is the problematic one? Do I
> actually need LLMNR in a simple home network?
> 
> The long term is to use this in a container based hosting environment.
> I'm pretty sure I actually don't need LLMNR there. So I'm just curious
> how to "optimize" my home setup.


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] nss-mymachines: slow name resolution

2016-02-16 Thread Kai Krakow
Am Tue, 16 Feb 2016 15:35:24 +0100
schrieb Lennart Poettering <lenn...@poettering.net>:

> On Mon, 15.02.16 21:32, Kai Krakow (hurikha...@gmail.com) wrote:
> 
> > Am Mon, 15 Feb 2016 14:28:19 +0100
> > schrieb Lennart Poettering <lenn...@poettering.net>:
> > 
> > > On Sun, 14.02.16 13:49, Kai Krakow (hurikha...@gmail.com) wrote:
> > > 
> > > > Hello!
> > > > 
> > > > I've followed the man page guide to setup mymachines name
> > > > resolution in nsswitch.conf. It works. But it takes around 4-5
> > > > seconds to resolve a name. This is unexpected and cannot be
> > > > used in production.
> > > 
> > > This sounds like the LLMNR timeout done. I figure we should fix
> > > the docs to suggest that "mymachines" appears before "resolve" in
> > > nsswitch.conf. That should fix your issue...
> > 
> > Apparently it doesn't fix it - although I will leave it in this
> > order according to your recommendation.
> > 
> > Is there a way to globally disable LLMNR altogether to nail it
> > down? I tried setting LLMNR=false in *.network - didn't help.
> 
> Use the LLMNR= setting in /etc/systemd/resolved.conf

Yeah! *thumbsup* You da man, Lennart!

Setting LLMNR to "resolve" or to "no" globally solves the problem which
proves your first suspicion.

Now, how can I figure out which interface is the problematic one? Do I
actually need LLMNR in a simple home network?

The long term is to use this in a container based hosting environment.
I'm pretty sure I actually don't need LLMNR there. So I'm just curious
how to "optimize" my home setup.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] nss-mymachines: slow name resolution

2016-02-15 Thread Kai Krakow
Am Mon, 15 Feb 2016 14:28:19 +0100
schrieb Lennart Poettering <lenn...@poettering.net>:

> On Sun, 14.02.16 13:49, Kai Krakow (hurikha...@gmail.com) wrote:
> 
> > Hello!
> > 
> > I've followed the man page guide to setup mymachines name
> > resolution in nsswitch.conf. It works. But it takes around 4-5
> > seconds to resolve a name. This is unexpected and cannot be used in
> > production.
> 
> This sounds like the LLMNR timeout done. I figure we should fix the
> docs to suggest that "mymachines" appears before "resolve" in
> nsswitch.conf. That should fix your issue...

Apparently it doesn't fix it - although I will leave it in this order
according to your recommendation.

Is there a way to globally disable LLMNR altogether to nail it down? I
tried setting LLMNR=false in *.network - didn't help.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] nss-mymachines: slow name resolution

2016-02-14 Thread Kai Krakow
Hello!

I've followed the man page guide to setup mymachines name resolution in
nsswitch.conf. It works. But it takes around 4-5 seconds to resolve a
name. This is unexpected and cannot be used in production.

I'm using systemd-networkd and systemd-resolved.

This is my config:

# /etc/nsswitch.conf:
# $Header: /var/cvsroot/gentoo/src/patchsets/glibc/extra/etc/nsswitch.conf,v 
1.1 2006/09/29 23:52:23 vapier Exp $

passwd:  compat mymachines
shadow:  compat
group:   compat mymachines

# passwd:db files nis
# shadow:db files nis
# group: db files nis

hosts:   files resolve mymachines myhostname
networks:files

services:db files
protocols:   db files
rpc: db files
ethers:  db files
netmasks:files
netgroup:files
bootparams:  files

automount:   files
aliases: files


Not sure if the errors below are related:

$ systemctl status systemd-{network,resolve}d
● systemd-networkd.service - Network Service
   Loaded: loaded (/usr/lib64/systemd/system/systemd-networkd.service; enabled; 
vendor preset: enabled)
   Active: active (running) since Sa 2016-02-13 13:40:51 CET; 24h ago
 Docs: man:systemd-networkd.service(8)
 Main PID: 763 (systemd-network)
   Status: "Processing requests..."
Tasks: 1 (limit: 512)
   Memory: 1.1M
  CPU: 400ms
   CGroup: /system.slice/systemd-networkd.service
   └─763 /usr/lib/systemd/systemd-networkd

Feb 14 13:11:27 jupiter.sol.local systemd-networkd[763]: Could not send 
rtnetlink message: Invalid argument
Feb 14 13:11:27 jupiter.sol.local systemd-networkd[763]: Could not remove 
route: Invalid argument
Feb 14 13:18:22 jupiter.sol.local systemd-networkd[763]: Could not send 
rtnetlink message: Invalid argument
Feb 14 13:18:22 jupiter.sol.local systemd-networkd[763]: Could not remove 
route: Invalid argument
Feb 14 13:25:43 jupiter.sol.local systemd-networkd[763]: Could not send 
rtnetlink message: Invalid argument
Feb 14 13:25:43 jupiter.sol.local systemd-networkd[763]: Could not remove 
route: Invalid argument
Feb 14 13:30:23 jupiter.sol.local systemd-networkd[763]: Could not send 
rtnetlink message: Invalid argument
Feb 14 13:30:23 jupiter.sol.local systemd-networkd[763]: Could not remove 
route: Invalid argument
Feb 14 13:39:57 jupiter.sol.local systemd-networkd[763]: Could not send 
rtnetlink message: Invalid argument
Feb 14 13:39:57 jupiter.sol.local systemd-networkd[763]: Could not remove 
route: Invalid argument

● systemd-resolved.service - Network Name Resolution
   Loaded: loaded (/usr/lib64/systemd/system/systemd-resolved.service; enabled; 
vendor preset: enabled)
   Active: active (running) since Sa 2016-02-13 13:40:51 CET; 24h ago
 Docs: man:systemd-resolved.service(8)
 Main PID: 824 (systemd-resolve)
   Status: "Processing requests..."
Tasks: 1 (limit: 512)
   Memory: 1.0M
  CPU: 2.122s
   CGroup: /system.slice/systemd-resolved.service
   └─824 /usr/lib/systemd/systemd-resolved

Feb 13 13:40:51 jupiter.sol.local systemd[1]: Starting Network Name 
Resolution...
Feb 13 13:40:51 jupiter.sol.local systemd-resolved[824]: Using system hostname 
'jupiter'.
Feb 13 13:40:51 jupiter.sol.local systemd[1]: Started Network Name Resolution.
Feb 13 13:40:56 jupiter.sol.local systemd-resolved[824]: Switching to DNS 
server 192.168.4.254 for interface enp5s0.

-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] nss-mymachines: slow name resolution

2016-02-14 Thread Kai Krakow
Am Sun, 14 Feb 2016 13:49:01 +0100
schrieb Kai Krakow <hurikha...@gmail.com>:

> Hello!
> 
> I've followed the man page guide to setup mymachines name resolution
> in nsswitch.conf. It works. But it takes around 4-5 seconds to
> resolve a name. This is unexpected and cannot be used in production.
> 
> I'm using systemd-networkd and systemd-resolved.

Some further investigation shows it is exactly 5 seconds because that
is the timeout e.g. "ping" and "ssh" use for the poll() call when I
strace the programs.

I then tried "ltrace" and it shows hanging in gethostbyname().

This behaviour is independent of resolver order in nsswitch.conf.

In contrast:

dig returns immediately, with the expected result NXDOMAIN, as my
nspawn machines are not registered in a DNS zone.

"getent hosts" returns immediately, yielding the correct IP.

gethostip shows the same behavior as ping and ssh.

What is special in this case? Why the timeout of 5 seconds?

> This is my config:
> 
> # /etc/nsswitch.conf:
> #
> $Header: /var/cvsroot/gentoo/src/patchsets/glibc/extra/etc/nsswitch.conf,v
> 1.1 2006/09/29 23:52:23 vapier Exp $
> 
> passwd:  compat mymachines
> shadow:  compat
> group:   compat mymachines
> 
> # passwd:db files nis
> # shadow:db files nis
> # group: db files nis
> 
> hosts:   files resolve mymachines myhostname
> networks:files
> 
> services:db files
> protocols:   db files
> rpc: db files
> ethers:  db files
> netmasks:files
> netgroup:files
> bootparams:  files
> 
> automount:   files
> aliases: files
> 
> 
> Not sure if the errors below are related:
> 
> $ systemctl status systemd-{network,resolve}d
> ● systemd-networkd.service - Network Service
>Loaded: loaded
> (/usr/lib64/systemd/system/systemd-networkd.service; enabled; vendor
> preset: enabled) Active: active (running) since Sa 2016-02-13
> 13:40:51 CET; 24h ago Docs: man:systemd-networkd.service(8) Main PID:
> 763 (systemd-network) Status: "Processing requests..."
> Tasks: 1 (limit: 512)
>Memory: 1.1M
>   CPU: 400ms
>CGroup: /system.slice/systemd-networkd.service
>└─763 /usr/lib/systemd/systemd-networkd
> 
> Feb 14 13:11:27 jupiter.sol.local systemd-networkd[763]: Could not send rtnetlink message: Invalid argument
> Feb 14 13:11:27 jupiter.sol.local systemd-networkd[763]: Could not remove route: Invalid argument
> Feb 14 13:18:22 jupiter.sol.local systemd-networkd[763]: Could not send rtnetlink message: Invalid argument
> Feb 14 13:18:22 jupiter.sol.local systemd-networkd[763]: Could not remove route: Invalid argument
> Feb 14 13:25:43 jupiter.sol.local systemd-networkd[763]: Could not send rtnetlink message: Invalid argument
> Feb 14 13:25:43 jupiter.sol.local systemd-networkd[763]: Could not remove route: Invalid argument
> Feb 14 13:30:23 jupiter.sol.local systemd-networkd[763]: Could not send rtnetlink message: Invalid argument
> Feb 14 13:30:23 jupiter.sol.local systemd-networkd[763]: Could not remove route: Invalid argument
> Feb 14 13:39:57 jupiter.sol.local systemd-networkd[763]: Could not send rtnetlink message: Invalid argument
> Feb 14 13:39:57 jupiter.sol.local systemd-networkd[763]: Could not remove route: Invalid argument
> 
> ● systemd-resolved.service - Network Name Resolution
>    Loaded: loaded (/usr/lib64/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
>    Active: active (running) since Sa 2016-02-13 13:40:51 CET; 24h ago
>      Docs: man:systemd-resolved.service(8)
>  Main PID: 824 (systemd-resolve)
>    Status: "Processing requests..."
>     Tasks: 1 (limit: 512)
>    Memory: 1.0M
>       CPU: 2.122s
>    CGroup: /system.slice/systemd-resolved.service
>            └─824 /usr/lib/systemd/systemd-resolved
> 
> Feb 13 13:40:51 jupiter.sol.local systemd[1]: Starting Network Name Resolution...
> Feb 13 13:40:51 jupiter.sol.local systemd-resolved[824]: Using system hostname 'jupiter'.
> Feb 13 13:40:51 jupiter.sol.local systemd[1]: Started Network Name Resolution.
> Feb 13 13:40:56 jupiter.sol.local systemd-resolved[824]: Switching to DNS server 192.168.4.254 for interface enp5s0.

-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd default target

2016-02-13 Thread Kai Krakow
Am Sat, 13 Feb 2016 14:34:59 -0800
schrieb Pathangi Janardhanan :

> Hi,
> 
>  I have some services configured with the above, Restart=on-failure
> and StartLimitInterval/StartLimitBurst and also StartLimitAction with
> reboot.
> 
>  The problem I am trying to look at is how to get out of a continuous
>  reboot cycle if one of the services is failing. That is why I was
>  looking for an option where systemd can have its default target
>  changed to recovery or something equivalent if the system is
>  undergoing repeated reboots within a certain interval.
> 
>  I may be able to do this by writing a separate unit/service for
>  this, but was wondering if there is an already built-in mechanism to
>  prevent a continuous reboot cycle caused by a misbehaving service/unit
>  which is configured to restart on failure.

If you're using a dracut-generated initrd you could simply add
"emergency" or "rescue" to the kernel cmdline. You can also explicitly
select a target with systemd.unit=... on the cmdline, see systemd(1).
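
For example, append one of these to the kernel command line (syntax as
documented in systemd(1)):

  # maintenance shell on a mostly booted system:
  systemd.unit=rescue.target
  # most minimal shell, little more than the root fs mounted:
  systemd.unit=emergency.target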

> On Sat, Feb 13, 2016 at 2:48 AM, Lennart Poettering
>  wrote:
> 
> > On Fri, 12.02.16 20:58, Pathangi Janardhanan (path.j...@gmail.com)
> > wrote:
> >
> > > Hi All,
> > >
> > >  The default target is usually set to multi-user or someother
> > > equivalent target. Is there any way in systemd that I can say
> > > something like
> > >
> > > " If the system is reloaded n number of times within the last x
> > > second", than set the default target to recover or emergency mode
> > > etc.?
> > >
> > >  Basically I am looking for recovering from a failing
> > > unit/service that
> > is
> > > forcing the system to reboot, and that is going in a cycle?
> >
> > You may configure whether a unit shall be automatically restarted
> > with Restart=on-failure. You may put a limit on it with
> > StartLimitInterval= and StartLimitBurst=. You may configure what
> > shall happen if the limit is hit with StartLimitAction=, which
> > includes making the system reboot.
> >
> > See the systemd.service and systemd.unit man pages.
> >
> > Lennart
> >
> > --
> > Lennart Poettering, Red Hat
> >
> 


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Settings in /etc/systemd/journald.conf dont work

2016-02-02 Thread Kai Krakow
Am Tue, 2 Feb 2016 11:09:51 +0100
schrieb Tommy_Lu :

> Hello
> 
> I am an old retired boy from Germany and a short-time visitor here in 
> this list. And I apologize for putting my user question here. But 
> unfortunately nowhere on the net could I find a solution or people 
> who have sufficient expertise. My problem relates to "journalctl". I 
> want journal entries to be kept for only the last 7 days, 
> regardless of the journal's file size. Therefore I have entered the 
> following parameters in /etc/systemd/journald.conf:
> Storage = auto
> MaxRetentionSec = 1week
> 
> But nothing happens. The entries (older than 7 days) will not be removed
> at runtime or after reboot. Even the entries in the
> "User-Actions-Journal" are not removed. Only when I restart the
> journal daemon is the system journal completely emptied, which is also
> wrong. However, the user journal remains unchanged and continues to
> contain month-old data. What would I have to change or do so that I get
> the same 7 days for both journals? I know that "size settings" are also
> possible, but I'd like to set this 7-day cycle.
> 
> I use "Debian Jessie" as a Debootstrap-Setup with some selected LXDE 
> components for a custom Desktop_GUI.
> apt show systemd
> Package: systemd
> Version: 215-17+deb8u3
> 
> Can anyone help me with a advice? Many thanks for your help.
> Thomas

I think the following might happen: both limits (size-based and
time-based) still apply, and journald respects the limit that keeps
more journal entries. Thus, you may want to try to set SystemMaxUse to
a much smaller limit.

More likely: I'm pretty sure time-based retention can only be applied
during file rotation - meaning: when your per-file size limit is high, the
contained entries will not be removed until file rotation occurs - but
then they are all going to be removed with the whole log file (all or
nothing).

Thus, you may also want experimenting with lowering the per-file size
limits so rotation occurs more often.
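
Something along these lines might be worth a try (the values are only
illustrative, not a recommendation):

  # /etc/systemd/journald.conf
  [Journal]
  Storage=auto
  MaxRetentionSec=1week
  # smaller files rotate more often, giving the time-based
  # cleanup a chance to kick in:
  SystemMaxUse=200M
  SystemMaxFileSize=20M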

But this also means you will never get rid of old entries at a
per-hour, or even per-day, granularity. The settings only guarantee
that you always see the entries within the limits. They do not,
however, guarantee that you will never see entries outside of the
limits. But from an admin point of view that is perfectly sensible.
This is the guarantee you actually want.

So your only way is to tighten the granularity of log files, then use
the limit settings to throw away whole log database files.

If you want to see only specific entries back to some point in time,
you have to use a date filter when calling journalctl. Retention is a
totally different approach and works as designed. There's a difference
between retention and filtering.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Should pam-activated systemd --user daemon quit after session has been closed?

2016-01-22 Thread Kai Krakow
Am Fri, 22 Jan 2016 23:19:11 +0100
schrieb "Armin K." :

> I use KDE Plasma 5 and lightdm as a display manager to login to the
> session. Once logged in, the lightdm (I guess it's using
> pam_systemd.so) login service will also start systemd user session
> and a session dbus daemon. The rest of Plasma follows.
> 
> The problem is, that whenever I log out from Plasma, the systemd user
> session isn't terminated and as such leaves the user bus daemon and
> lots of services around, which makes shutdown hang (if I initiated
> the shutdown, which will first initiate log out) for some time (90
> seconds by default until it's forcibly killed by systemd) which I
> rather find annoying. I can see the systemd user session is the
> culprit, because I can see "Waiting for session for user 1000 to
> terminate [timer]" or something like that.
> 
> When I just log out, I can manually stop the systemd user session
> using loginctl kill-user/kill-session (I am not sure which one I used
> last time), which will terminate user bus and all the other services
> from that session.
> 
> Now, to the original question: Once PAM closes the session (once
> logout is received), should systemd --user daemon terminate as well?
> Currently, that's not the case on my system.

Which distribution do you use? Did you properly load the systemd bits
in pam? Usually, the systemd user session should end as soon as pam
closes the session - unless there's still a process around (this is
probably why KillUserProcesses improved the situation).
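
To check the pam side, something like this should show whether
pam_systemd is wired in (paths and output differ per distribution):

  $ grep -r pam_systemd /etc/pam.d/
  /etc/pam.d/system-auth:-session optional pam_systemd.so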

My first guess is: Does your Xsession try to spawn dbus itself? Have you
tried commenting it out? Should be in /etc/X11 or somewhere in the
session files installed by lightdm.

My Xsession files check whether there's already a dbus around (probably
provided by the systemd user session) and only spawn one if there's none.

If you identify the problematic process you could try to kill it using
KDE's shutdown scripts (look in "vi $(which startkde)" to see
where it looks for shutdown scripts). It'll be a workaround but may
help.


-- 
Regards,
Kai

Replies to list-only preferred.


signature.asc
Description: PGP signature
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Should pam-activated systemd --user daemon quit after session has been closed?

2016-01-22 Thread Kai Krakow
Am Fri, 22 Jan 2016 23:19:11 +0100
schrieb "Armin K." :

> I have a following problem on my system and it has been bothering me
> for some time.
> 
> I use KDE Plasma 5 and lightdm as a display manager to login to the
> session. Once logged in, the lightdm (I guess it's using
> pam_systemd.so) login service will also start systemd user session
> and a session dbus daemon. The rest of Plasma follows.
> 
> The problem is, that whenever I log out from Plasma, the systemd user
> session isn't terminated and as such leaves the user bus daemon and
> lots of services around, which makes shutdown hang (if I initiated
> the shutdown, which will first initiate log out) for some time (90
> seconds by default until it's forcibly killed by systemd) which I
> rather find annoying. I can see the systemd user session is the
> culprit, because I can see "Waiting for session for user 1000 to
> terminate [timer]" or something like that.
> 
> When I just log out, I can manually stop the systemd user session
> using loginctl kill-user/kill-session (I am not sure which one I used
> last time), which will terminate user bus and all the other services
> from that session.
> 
> Now, to the original question: Once PAM closes the session (once
> logout is received), should systemd --user daemon terminate as well?
> Currently, that's not the case on my system.
> 
> Let me know if I can provide any additional info.

Have you tried SDDM instead of LightDM? SDDM is recommended for KDE
Plasma 5 afaik. I had similar issues with shutdown when I still used
LightDM. Tho I never tracked it back to an orphan user session -
good catch.


-- 
Regards,
Kai

Replies to list-only preferred.


signature.asc
Description: PGP signature
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Should pam-activated systemd --user daemon quit after session has been closed?

2016-01-22 Thread Kai Krakow
Am Sat, 23 Jan 2016 00:20:34 +0100
schrieb "Armin K." :

> > My first guess is: Does your Xsession try to spawn dbus itself?
> > Have you tried commenting it out? Should be in /etc/X11 or
> > somewhere in the session files installed by lightdm.
> >   
> 
> There's only one dbus user daemon. I have no XSession (cough,
> lightdm, cough), and no files in /etc/X11 that start one.

Even lightdm uses session scripts. They are just not where you expect
them (read: not in /etc/X11).

SDDM has them here:
/usr/share/sddm/scripts

These even source some scripts from /etc/X11 (at least in Gentoo).

I don't remember where lightdm stored those.

-- 
Regards,
Kai

Replies to list-only preferred.


signature.asc
Description: PGP signature
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Additional error details when resource limits are exceeded

2015-12-23 Thread Kai Krakow
Am Wed, 23 Dec 2015 22:55:13 +0800
schrieb Peter Hoeg :

> Hi,
> 
> >Type=simple cannot detect when a service is ready. Systemd simply
> >assumes it works as soon as execution starts. It doesn't matter in case
> >of teamviewerd but with service inter-dependencies this becomes
> >important.
> >
> >Type=simple considers the service up immediately thus triggering
> >dependent service for immediate execution while Type=forking
> >considers the service up only when appearance of the PIDFile signals
> >so, and only then schedules starting dependent services.
> >
> >So, Type=forking is the only way to have synchronization points
> >between service that depend on each other.
> 
> In all fairness, the presence of a PID really doesn't say anything
> either about availability. The only way to be sure is to use
> Type=Notify with a cooperating daemon.

Okay, remove "only" from my sentence. It is _one_ way to signal the
service manager that a service is ready. Apparently, some services
(like MySQL) do early forking and write the PID file, and only then do
their initialization. Still, it's better in most cases
than Type=simple until Type=notify becomes more widely used (as you
write, it needs cooperation from the service). Apparently, notifying
requires including a small systemd-specific lib (which does no harm
when used on non-systemd systems) as far as I understood. But many
upstreams and even more users would deny or rage against including such
a library, just because it says "systemd" on its name tag. :-(
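
For reference, the cooperating variant looks roughly like this (daemon
name invented; the daemon itself must signal readiness via sd_notify(3)
from libsystemd):

  [Service]
  Type=notify
  ExecStart=/usr/bin/mydaemon --foreground
  # mydaemon calls sd_notify(0, "READY=1") once its initialization
  # has finished; only then are dependent units scheduled.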

> >> I haven't looked at the code, but again, if systemd knows that some
> >> limit is being exceeded, wouldn't it make sense to say which one?
> >
> >Probably yes... Maybe Lennart pays attention?
> 
> Otherwise that might be fun xmas project for an enterprising young
> coder!

Or that way... :-)

Let Lennart have a calm xmas and all of you others, too.


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Query regarding "EnvironmentFile"

2015-12-21 Thread Kai Krakow
Am Thu, 10 Dec 2015 01:08:34 +0100
schrieb Reindl Harald :

> Am 09.12.2015 um 20:46 schrieb Lennart Poettering:
> > I probably should never have added EnvironmentFile= in the first
> > place. Packagers misunderstand that unit files are subject to admin
> > configuration and should be treated as such, and that spliting out
> > configuration of unit files into separate EnvironmentFiles= is a
> > really non-sensical game of unnecessary indirection
> 
> i strongly disagree

I disagree, too, about "should never have added". But I totally agree
with the point that packagers misunderstood the purpose of those files.
And this whole discussion perfectly shows it.

> it's the easiest way to not touch/copy the systemd-unit *and* 
> systemd-snippets for just adjust a simple variable - the point here
> is simple
> 
> copy units and/or add own snippets has easily two side effects
> 
> * don't get well deserved updates for the units
> * or snippets don't play well with later dist-versions of the unit
> 
> a EnvironmentFile supported by the distributions unit is well better
> for simple adoptions

An environment file should never be used to configure the runtime
behavior of a service [1], e.g. to trigger conditionals in config files
in a way that is not propagated during a service reload.

Instead, they are very useful in service instantiation by using the
servicename@.service templating pattern. E.g. for nspawn services you
could create different configuration options that apply for container
boots but not during runtime. Or you could create different instances
of MySQL. Or, or, ... I'm sure you get the point. For me this is almost
the only valid point in using such files.
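
A sketch of that instantiation pattern (unit name, paths and the
variable are invented):

  # /etc/systemd/system/mydaemon@.service
  [Service]
  EnvironmentFile=/etc/mydaemon/%i.conf
  ExecStart=/usr/bin/mydaemon --listen=${LISTEN_ADDRESS}

  # /etc/mydaemon/blue.conf
  LISTEN_ADDRESS=127.0.0.1:3306

  # one instance per config file:
  $ systemctl start mydaemon@blue.service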

All other purposes are ugly remnants of sysvinit workarounds to cope
with completely unmaintainable and complex scripts.

So, environment variables should go into service overrides or
directly into config files which you edit instead of creating new config
files. If you apply your own config management, then well, go with an
environment file if you feel so. I think it's okay. But this should
never ever be shipped as a default distribution configuration (but it
could be valid as an example, with proper documentation of the
side-effects this can have). You still should or want to modify your
exec line because:

The problem with the goal of never modifying the exec line is that
vanished options may be silently ignored during upgrades, while after
an upgrade the service will instead fail loudly if you added a now
incompatible/deprecated option directly to the exec line. And that
failing is a good thing [tm]: it makes debugging upgrade problems a lot
easier and makes for better upgrade paths. If you really don't want to
put your production server at risk (and you really shouldn't), you
have a staging server anyway where you apply updates first, then fix
things, then apply the upgraded configuration management to the farm
along with the package updates. Virtualization and/or containers make
that easy and cheap.
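
And if you do want to change the exec line locally, a drop-in override
keeps the distribution unit intact (a minimal sketch; "systemctl edit
mydaemon.service" creates the file for you):

  # /etc/systemd/system/mydaemon.service.d/override.conf
  [Service]
  # the empty assignment clears the shipped ExecStart first:
  ExecStart=
  ExecStart=/usr/bin/mydaemon --my-local-option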

Concluding: The option should stay, but please, package maintainers,
remove your ugly cruft. This is something I constantly struggle with in
the way it is done in Gentoo for elasticsearch. But it's probably even
encouraged that way by the developers - and that's now the outcome of
this PITA.

Thus: Please, maintainers and developers, remove that cruft. Do not let
Lennart remove this useful option just to force others into removing
their shitty cruft. A service file should always do the correct thing
upon invoking either reload or restart respectively. Let the admin break
this rule if she or he feels like doing so. But do not ship that broken
behavior as a default.

Regarding the OP and a few follow-ups, a German saying comes to my mind:
"Das kannst'e zwar so machen, is' aber scheiße" ("sure, you can do it
that way, but it's crap") :-) So let's better support him in fixing his
configuration and doing it the proper way instead of insisting on who is
right or who is wrong. The concept of environment files is clearly
misused in many cases. Nothing is really bad about the concept itself.

[1]: Unless you know what you're doing - but as stated, it should not
come by default and thus offer an easy way to screw up your system.

-- 


signature.asc
Description: PGP signature
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Query regarding "EnvironmentFile"

2015-12-21 Thread Kai Krakow
Am Mon, 21 Dec 2015 22:41:04 +0100
schrieb Marc Haber <mh+systemd-de...@zugschlus.de>:

> On Mon, Dec 21, 2015 at 10:18:05PM +0100, Kai Krakow wrote:
> > Thus: Please maintainers and developers, remove it. Do not let
> > Lennart remove this useful option to force others into removing
> > your shitty cruft.
> 
> This is exactly why systemd is the top one most hated piece of open
> source software. We are not here to be educated about the one and only
> right way of doing things.
> 
> Unix used to be about choice.

I'm not arguing against choice, nor did I write anything against it. But
a distribution and upstreams should not ship ugly configuration
concepts, or concepts tied to the specifics of some distribution. How
many upstreams are out there who ship the Debian concept as the one and
only, while others ship the SuSE concept (sysconfig), which in turn is
similar to but incompatible with Red Hat's?

Please tell me: Where is this about choice?

Choice is when you decide yourself (or your distribution does) to adopt
a configuration concept. And that is why I'm for keeping the
EnvironmentFile option. In essence, this means: you are still, and will
stay, in charge of that choice.

Please read the whole post before answering.
 
> Too bad that we allowed this to be no longer the case. Linux is no
> longer about choice. Linux nowadays is about what the systemd people
> want.

That's wrong. Maybe you just cannot adapt to new, more modern, and more
sane concepts and defaults. You are then probably also the type of
person who forces upstream to remove options so nobody can misuse them
any longer.

Systemd is vastly different from the concepts we had for the last 50
years (or so), but Linux really needs this modernization. Many concepts
are broken in terms of modern computing. Yes, it still works. Yes, never
break a working system. But we really need new concepts; we are the
generation laying out the proper path and a solid foundation for
following generations of sysadmins. This also means that systemd will
stay in flux for a few more years.

Really... Don't get me wrong. I don't mean that personally. But I can
observe the same behavior among my workmates - and it's almost always a
problem of not being willing to learn and handle new concepts.

Nobody is taking choice away; we just need to learn to apply it
properly and in a new way. And this forum should be for discussing ways
to do that instead of starting to hate each other because someone else
took our toys away.

If you are using systemd, it's time to rethink some of your concepts
and re-apply them in the most straight-forward way. Mixing runtime,
default, startup (etc. etc.) configuration with each other is really a
pain - and not the way to do it. It never has been, even in sysvinit.
But many distributions and upstreams weakened this separation.

And shipping default configurations that make it easy to miss issues
with deprecated custom configuration has always been bad. I think this
is what the devs in this thread are talking about at its core.

> Too bad that we gave the systemd people the power of forcing us to run
> our systems their way.

Then, don't run systemd. Nobody forces you. It was your choice to use a
distribution which migrated to systemd as the one and only option.
Thus, it's not the systemd people forcing you. It's your distribution.
Complain there. Or switch to a distribution which allows for more
choice. Or put simply: Deal with it.

Systemd is also not taking over upstreams. The concept allows leaving
out all the systemd bits. Configuration concepts may change, however.
Still, if systemd gets pulled in by packages, it's probably the
distribution's "fault" (except for maybe Gnome; I don't use it so I
don't know about the options).

But we probably need to find ways to support configuration concepts
which cope with both systemd and non-systemd installations. In the end,
it's about choice.

I cannot see anything here in the thread which would disallow
continuing to use non-systemd installations.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Additional error details when resource limits are exceeded

2015-12-21 Thread Kai Krakow
Am Tue, 22 Dec 2015 08:41:14 +0800
schrieb Peter Hoeg :

> Hi,
> 
> >[Service]
> >Type=forking
> >PIDFile=/run/teamviewerd.pid
> >ExecStart=/opt/teamviewer10/tv_bin/teamviewerd -d
> >Restart=on-abort
> >StartLimitInterval=60
> >StartLimitBurst=10
> 
> The alternative ExecStart I'm using:
> 
> ExecStart=/opt/teamviewer10/tv_bin/teamviewerd -f
> 
> And then you can get rid of PIDFile and Type.

Type=simple cannot detect when a service is ready. Systemd simply
assumes it works as soon as execution starts. It doesn't matter in the
case of teamviewerd, but with service inter-dependencies this becomes
important.

Type=simple considers the service up immediately, thus triggering
dependent services for immediate execution, while Type=forking considers
the service up only when the appearance of the PIDFile signals so, and
only then schedules starting dependent services.

So, Type=forking is the only way to have synchronization points between
services that depend on each other.
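
Illustrated with two invented units: with the snippet below, b.service
is only scheduled once a.service - being Type=forking - has written its
PIDFile:

  # b.service
  [Unit]
  Requires=a.service
  After=a.service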


> >Resource limit exhaustion (in kernel sense) is usually easily to be
> >found in dmesg. I think this wasn't the case here. I propose it was
> >just because of StartLimitBurst?
> 
> I haven't looked at the code, but again, if systemd knows that some
> limit is being exceeded, wouldn't it make sense to say which one?

Probably yes... Maybe Lennart pays attention?

The message you were seeing usually applies to resource limits as in
"respawn limits & friends", not as in "RLIMIT" (kernel). Otherwise you
should explicitly see messages about "RLIMIT".

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Additional error details when resource limits are exceeded

2015-12-21 Thread Kai Krakow
Am Wed, 9 Dec 2015 11:45:00 +0800
schrieb Peter Hoeg :

> Hi,
> 
> it turns out that the teamviewer daemon wasn't behaving correctly and
> double-forked before the PID file was written. Fixed by running it as
> Type=simple and in the foreground.
> 
> It however, still doesn't change anything about the error message,
> which is just plain wrong in this case as opposed to unhelpful.
> 
> Is there anything I can provide or do to help with this?

AFAIR I had the same problem and I think it was respawning itself over
and over again because of this PID file issue.

I fixed it by using a different service file (still forking):


[Unit]
Description=TeamViewer remote control daemon
After=network.target

[Service]
Type=forking
PIDFile=/run/teamviewerd.pid
ExecStart=/opt/teamviewer10/tv_bin/teamviewerd -d
Restart=on-abort
StartLimitInterval=60
StartLimitBurst=10

[Install]
WantedBy=graphical.target


The original included different redundant deps to NetworkManager, dbus,
network-online and NetworkManager-wait-online which completely exploded
somehow. Not sure if this was distro or upstream specific (Gentoo).

Resource limit exhaustion (in the kernel sense) is usually easy to find
in dmesg. I think this wasn't the case here. I suspect it was just
because of StartLimitBurst?

> 
> --
> Regards,
> Peter
> 
> On 15-12-03 at 12:42, Peter Hoeg wrote:
> >Hi,
> >
> >I'm using systemd 228-3 on Arch Linux (up-to-date as of time of
> >writing) and am having an issue figuring out why a particular
> >service fails to run.
> >
> >The message I am getting is "Job for teamviewerd.service failed
> >because a configured resource limit was exceeded." but how do I
> >figure out WHICH resource limit is causing this?
> >
> >What I have tried:
> >
> >1) checking "systemctl status teamviewerd.service" and "journalctl
> >-xe" as mentioned by systemd 2) bumping up the various LimitXXX
> >configuration items in the unit file 3) adding
> >Environment=SYSTEMD_LOG_LEVEL=debug to the unit file
> >
> >All in vain.
> >
> >However running the ExecStart line manually using sudo works fine.
> >
> >The fact that this is teamviewer really doesn't matter - it just
> >happened to be where I noticed it, but providing additional info on
> >this error would be very welcome.


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd (user) and (sd-pam) (user) processes in login shell

2015-12-21 Thread Kai Krakow
Am Tue, 8 Dec 2015 01:36:01 +0200
schrieb Mantas Mikulėnas :

> What uid does "oracle" have – is it within the system account range
> (usually 1–999) or user account (1000–)? I wonder if it's the latter,
> which would mean systemd-logind would clean up various things like
> IPC on logout... (see logind.conf)

Is this hard-coded in systemd (uid 0..999 and 1000+) or is it read from
login.defs?

Because I cannot find anything related to it in logind.conf, which
leads me to the assumption that your reference was about RemoveIPC and
friends only...

-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Query regarding "EnvironmentFile"

2015-12-21 Thread Kai Krakow
Am Mon, 21 Dec 2015 23:29:57 +0100
schrieb Marc Haber <mh+systemd-de...@zugschlus.de>:

> On Mon, Dec 21, 2015 at 11:14:43PM +0100, Kai Krakow wrote:
> > I cannot see anything here in the thread which would disallow
> > continue using non-systemd installations.
> 
> The problem is that many concepts of systemd are really nice. One
> wants to have things like that.

I can totally second that.

> The problem is that a minoriy of concepts and the attitude of the
> makers make working with systemd a constant source of increased blood
> pressure and a strong urge to break something expensive just to get
> rid of the aggression.

Yep, I probably cannot deny that.

But to push forward a piece of software like systemd and make it
successful, you probably need that kind of attitude. And without this
"hard course" systemd wouldn't be what it is now - it wouldn't have
those strong, well-thought-out concepts but instead be some set of
tools trying to do everything somehow but almost nothing truly correct
(correct in terms of stability and reliability). Nobody would've
noticed, few would have adopted it, and probably no one would continue
to use it.

Look at the kernel, with Linus. His attitude surely conflicts with one
person or another, probably even a lot of them. But his straight course
made the kernel a success. It is stable because it doesn't let everyone
introduce broken concepts.

But one could argue that backwards compatibility is probably handled
differently there (then again, systemd is a new project while the
kernel is not). Tho, old binaries probably won't run under modern
kernels - which is probably mostly related to glibc, or to dropped
a.out support.

And speaking of glibc, I remember there have been "attitude
compatibility issues" with the devs there, too, in the past. And people
who forked the project... Yeah... I think many did not even notice.
Glibc just worked for them.

So, every project probably has this type of person or dev. And those
people are needed for the long-term success of the project. And given
the loudness of some people here I can totally understand Lennart
reacting in one way or another - even if it hits people who don't
deserve it.

Back to concepts: I'm always trying to find my way through the new
ideas, trying to understand them instead of rejecting them, then
re-apply my workflow. If it doesn't fit, I throw either that away, or
the software. Probably one of many reasons why I chose Gentoo, although
I sometimes play with the idea of trying Fedora. But in the end I would
miss much of the freedom I currently have (and make use of).


-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd (user) and (sd-pam) (user) processes in login shell

2015-12-21 Thread Kai Krakow
Am Mon, 21 Dec 2015 21:43:24 -0500
schrieb Mike Gilbert <flop...@gentoo.org>:

> On Mon, Dec 21, 2015 at 7:36 PM, Kai Krakow <hurikha...@gmail.com>
> wrote:
> > Am Tue, 8 Dec 2015 01:36:01 +0200
> > schrieb Mantas Mikulėnas <graw...@gmail.com>:
> >
> >> What uid does "oracle" have – is it within the system account range
> >> (usually 1–999) or user account (1000–)? I wonder if it's the
> >> latter, which would mean systemd-logind would clean up various
> >> things like IPC on logout... (see logind.conf)
> >
> > Is this hard-coded in systemd (uid 0..999 and 1000+) or is it read
> > from login.defs?
> >
> > Because I cannot find anything related to it in logind.conf which
> > leads me to the assumption your reference was about RemoveIPC and
> > friends only...
> 
> I rather doubt the numeric value of the oracle UID has anything to do
> with the problem you are having.
> 
> With systemd, you really cannot start daemons from an interactive
> shell. Rather, you need to define a service unit, and call "systemctl
> start" to start long-running daemons.

I think we are talking about different things here. My question is a
spin-off of the OP.

Mantas actually made the connection between the user and system uid
ranges and systemd behavior. I just wondered if this is:

  [_] an assumption based on guessing (don't put a cross here)
  [_] hard-coded which personally I'd find surprising
  [_] configurable and I didn't find the knob

But putting two and two together, your answer means (to the OP):

  Don't start daemons directly from a shell and exit. Systemd will blast
  them away. Defined behavior.

Yes, it won't work.

-- 
Regards,
Kai

Replies to list-only preferred.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn: cannot join existing macvlan

2015-06-19 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

 On Sat, 30.05.15 19:55, Kai Krakow (hurikha...@gmail.com) wrote:
 
 The next issue with your argument is: AFAIR nspawn doesn't create a
 macvlan interface based on the machine name. You have to pass the name of
 a physical interface which transports this macvlan. The man page at least
 states that you use an existing physical interface:
 
 True, I was a bit confused there...

:-) Fine. I thought I was totally wrong.

 So your assumption about macvlan seems to be incorrect. The other network
 types may be based off the machine name but it doesn't work this way with
 macvlan.
 
 Yeah, nspawn creates a n interface mv-foo from a network interface
 foo on the host.

Yes, it creates it on the host. In the guest, AFAIR (I cannot currently try 
this), it creates host0 as the interface.

Correct me if I'm wrong, but I see macvlan as a sort of peer-to-peer level-2 
LAN. The host endpoint is mv-foo, each guest has its own endpoint host0, 
spanning a virtual switch across all peers, thus between mv-foo and each 
host0.

 I think the logic is wrong here in systemd-nspawn. Instead of trying to
 create the host-side macvlan itself it should insist on it being there
 already (to have one well-defined state to start with, and only
 optionally create it by itself). Then, it can join multiple machines to
 the same macvlan.
 
 I don't grok this?
 
 the same macvlan?

Well, the level2 peer-to-peer LAN...

So, in this context, mv-foo should only be created once. Successive guests 
should only be joined to the existing macvlan.

 I have the suspicion that the confusion here stems from the fact that
 nspawn creates the macvlan iface on the host first, then moves it into
 the container. but if you already have an iface by that name on the
 host, then it cannot create the macvlan under that name.

I don't think this is how it worked as far as I remember, but as already 
pointed out: I still have to try that again. Currently my setup refuses to 
run the machines, I need to reconfigure the system first to get one machine 
up and running.

In this context: I think when it worked, it created mv-foo on the host (so 
you are right here), but it didn't move it into the container. It creates a 
companion device there called host0. This is a level-2 peer-to-peer network 
in the kernel. So maybe host0 is created on the host, then moved into the 
container - I'm not sure. Other peers could be joined.

The mv-foo interface is a virtual MAC address on the host. If you created it 
manually, you would join more virtual interfaces to the physical interface, 
i.e. host0 from the container.

Each peer interface can communicate with the others but not with the 
physical interface directly, unless your switch has packet-mirroring 
capabilities and sends packets back to the port they originated from - 
which is usually not allowed by the Ethernet switch specification.

The kernel's MACVLAN implementation won't pass packets to the physical 
interface directly but always through the medium connecting to the switch, 
and the switch won't pass them back on the same physical port, for the sane 
reasons above. However, the kernel will pass packets between MACVLAN peers 
it knows locally without touching the physical interface. The physical 
interface is only a transport medium for non-local packets (from the view 
of the locally known MACVLAN MACs).

To overcome this issue, I need to configure mv-foo to receive my DHCP lease 
instead of the physical interface. Now each peer can communicate with my LAN 
and with each other MACVLAN peer on my physical interface (which now 
includes my host's mv-foo on layer 3). The MAC address of the physical 
interface is more or less unused.
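
For the record, the manual equivalent of what networkd sets up for me would 
be roughly this (from memory, untested; interface names as used in this 
thread):

  ip link add mv-enp5s0 link enp5s0 type macvlan mode bridge
  ip link set mv-enp5s0 up
  # then run the DHCP client on mv-enp5s0 instead of enp5s0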

 I figure we can fix that by creating the iface under a random name
 first on the host, then move it into the container, and then rename it
 to the right name afterwarads.

The problem is with the interface that stays in the host, not with the 
interface in the container. This fix may be for a second problem I did not 
yet observe.

 A work-around would be to name the .netdev iface of yours something
 else than mv-enp5s0, call it waldi or so, so that it doesn't
 conflict with the name for the container in the short time-frame in
 which the iface nspawn creates exists on the host...

I need to create this manually with networkd and configure it as a DHCP 
client for the above reasons. Otherwise my host communicates through the 
physical interface foo instead of mv-foo, which effectively disables 
communication with the MACVLAN peers for the reasons outlined above.

 Can you verify if such a change fixes your issue? If so, we can
 randomoize the name initially, as sugegsted above.

I'll first restore a configuration which gets one container up and running 
again with working MACVLAN, then we can figure out where the problem is. I 
somehow believe your guess about the source of the issue is currently not 
quite right. Such a machine could communicate with my router

Re: [systemd-devel] systemd-nspawn: cannot join existing macvlan

2015-05-30 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

 On Mon, 25.05.15 16:26, Kai Krakow (hurikha...@gmail.com) wrote:
 
 Lennart Poettering lenn...@poettering.net schrieb:
 
  On Fri, 08.05.15 20:53, Kai Krakow (hurikha...@gmail.com) wrote:
  
   # systemd-nspawn -b --link-journal=try-guest
   # --network-macvlan=enp4s0 --
   bind=/usr/portage --bind-ro=/usr/src --machine=test
   Spawning container test on /var/lib/machines/test.
   Press ^] three times within 1s to kill container.
   Failed to add new macvlan interfaces: File exists
   
   I still don't think that systemd-nspawn should insist on creating
   the host- side macvlan bridge and fail, if it cannot. It should just
   accept that it is already there.
  
  My findings show that it actually does accept this case. But I had to
  explicitly order the machines after network.target to successfully
  start at boot time.
  
  I changed git now to order nspawn units by default after
  network.target now. In the long run we should replace this by calling
  into networkd though, without waiting for all networking, for
  whatever that means...
 
 Well, now, with v220, we are back to the original problem:
 
 machines # systemd-nspawn --boot --link-journal=try-guest --machine=test
 -- network-macvlan=enp5s0
 Spawning container test on /var/lib/machines/test.
 Press ^] three times within 1s to kill container.
 Failed to add new macvlan interfaces: File exists
 
 If mv-enp5s0 is already there (maybe by means of another machine), it no
 longer starts any other machine on the same macvlan. I don't think this
 is how it should work.
 
 Well, there can only be one machine with the same name, and we use
 that name in the macvlan interface name. Please assign different names
 to your containers so that they will also get differently named
 macvlan names, and all should be good.

But what is the purpose of macvlan if you cannot merge different containers 
into the same LAN segment? One feature of macvlan is that all virtual MAC 
addresses can communicate with each other. That's why I also joined my host 
machine into that macvlan (by creating it with networkd).

The next issue with your argument is: AFAIR nspawn doesn't create a macvlan 
interface based on the machine name. You have to pass the name of a physical 
interface which transports this macvlan. The man page at least states that 
you use an existing physical interface:

--network-macvlan=
Create a macvlan interface of the specified Ethernet network
interface and add it to the container. A macvlan interface is a
virtual interface that adds a second MAC address to an existing
physical Ethernet link. The interface in the container will be named
after the interface on the host, prefixed with mv-. Note that
--network-macvlan= implies --private-network. This option may be
used more than once to add multiple network interfaces to the
container.

Trying to nspawn a macvlan without giving a physical device results in 
"no such device":

# systemd-nspawn --boot --link-journal=try-guest --machine=gentoo-mysql-base 
--network-macvlan=
systemd-nspawn: option '--network-macvlan' requires an argument

# systemd-nspawn --boot --link-journal=try-guest --machine=test --network-
macvlan=test
Spawning container test on /var/lib/machines/test.
Press ^] three times within 1s to kill container.
Failed to resolve interface test: No such device

So your assumption about macvlan seems to be incorrect. The other network 
types may be based off the machine name but it doesn't work this way with 
macvlan.

So I wonder what the purpose of macvlan is if you can only create one 
machine using macvlan on my single physical link - and that's it?

I think the logic is wrong here in systemd-nspawn. Instead of trying to 
create the host-side macvlan itself it should insist on it being there 
already (to have one well-defined state to start with, and only optionally 
create it by itself). Then, it can join multiple machines to the same 
macvlan.

I'm using the following networkd config on the host to define this well-
defined state, enabling the host to also communicate through macvlan:

# cat /etc/systemd/network/10-macvlan.{netdev,network}
[NetDev]
Name=mv-enp5s0
Kind=macvlan

[MACVLAN]
Mode=bridge
#
[Match]
Name=en*

[Network]
MACVLAN=mv-enp5s0
LinkLocalAddressing=no
DHCP=no

If I remove this I am able to start one of the containers, but not two or 
more. If I use this I am not able to start any macvlan container.

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] resolved: Assertion 'n > 0' failed

2015-05-25 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

 On Sat, 18.04.15 17:38, Kai Krakow (hurikha...@gmail.com) wrote:
 
 Hello!
 
 Sometimes I'm seeing messages like this:
 
 [ 5780.379921] systemd-resolved[685]: Assertion 'n > 0' failed at
 /var/tmp/portage/sys-apps/systemd-219-
 r2/work/systemd-219/src/resolve/resolved-dns-answer.c:28, function
 dns_answer_new(). Aborting.
 [ 5780.621865] systemd-resolved[7396]: Assertion 'n > 0' failed at
 /var/tmp/portage/sys-apps/systemd-219-
 r2/work/systemd-219/src/resolve/resolved-dns-answer.c:28, function
 dns_answer_new(). Aborting.
 
 Should be fixed in git. Please verify!

Consider this as closed then, unless I report back. This problem wasn't 
clearly reproducible for me. It only hit me sometimes.

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Rebooting systemd-nspawn container results in shutdown

2015-05-25 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

[...]

 systemd-219 on the host, 218 in the container.
 
 This is fixed in git since a while now, please test.

Yes, works for me in v220. Thanks.

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn: cannot join existing macvlan

2015-05-25 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

 On Fri, 08.05.15 20:53, Kai Krakow (hurikha...@gmail.com) wrote:
 
  # systemd-nspawn -b --link-journal=try-guest --network-macvlan=enp4s0
  # --
  bind=/usr/portage --bind-ro=/usr/src --machine=test
  Spawning container test on /var/lib/machines/test.
  Press ^] three times within 1s to kill container.
  Failed to add new macvlan interfaces: File exists
  
  I still don't think that systemd-nspawn should insist on creating the
  host- side macvlan bridge and fail, if it cannot. It should just accept
  that it is already there.
 
 My findings show that it actually does accept this case. But I had to
 explicitly order the machines after network.target to successfully start
 at boot time.
 
 I changed git now to order nspawn units by default after
 network.target now. In the long run we should replace this by calling
 into networkd though, without waiting for all networking, for
 whatever that means...

Well, now, with v220, we are back to the original problem:

machines # systemd-nspawn --boot --link-journal=try-guest --machine=test --
network-macvlan=enp5s0
Spawning container test on /var/lib/machines/test.
Press ^] three times within 1s to kill container.
Failed to add new macvlan interfaces: File exists

If mv-enp5s0 is already there (maybe by means of another machine), it no 
longer starts any other machine on the same macvlan. I don't think this is 
how it should work.

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Booting to systemd in a chroot

2015-05-14 Thread Kai Krakow
JT Olds jto...@xnet5.com schrieb:

 Thanks Lennart,
 
 I tried pivot_root briefly last night after emailing but my initial
 attempt didn't work. Unfortunately I next tried switch_root, which, lol,
 wiped my root partition.
 
 I'll try pivot_root when I get a working computer again.

Since you are reinstalling anyways, I'd suggest trying out btrfs as your 
filesystem. Create separate subvolumes for each OS and you can get rid of 
chrooting anything. Plus, you can share your home subvolume if you like.

Just take care that btrfs is still under fast development, so both OSes 
should run a sufficiently recent kernel. At least I'd advise you to use the 
oldest kernel of the OS set to format btrfs, and not to enable incompatible 
features later.

The cool factor is, BTW, that you could deduplicate all your OS 
installations, so that filesystem blocks which are equal throughout the OS 
set become shared, resulting in lower disk space usage.
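
A rough sketch of such a layout (device and subvolume names are just 
examples):

  mkfs.btrfs /dev/sdX2
  mount /dev/sdX2 /mnt
  btrfs subvolume create /mnt/wheezy
  btrfs subvolume create /mnt/jessie
  btrfs subvolume create /mnt/home
  # then boot each OS with rootflags=subvol=wheezy or
  # rootflags=subvol=jessie on its kernel command line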

 Thanks again
 -JT
 
 On Thu, May 14, 2015, 10:25 AM Lennart Poettering lenn...@poettering.net
 wrote:
 
 On Thu, 14.05.15 06:12, JT Olds (jto...@xnet5.com) wrote:

  Hey folks!
 
  I'm getting lots of systemd errors like
  `Failed at step NAMESPACE spawning /usr/lib/rtkit/rtkit-daemon: Invalid
  argument` and just wondering what I'm doing wrong.
 
  For background: I have a linux computer that's running Debian Wheezy. I
  want to install and dual boot Jessie, but without creating a new
 partition,
  so I want to do it in a chroot (cause why not, I should be able to,
  right?).

 Sorry, but this simply cannot work: a chroot() is too weak, and
 doesn't mix well with file system namespacing (which triggers the
 errors you are seeing). Proper file system namespacing hides the fact
 pretty well that things are namespaced, but chroot does
 not. Especially if you then mix namespacing and chroot things become
 ugly...

 Hence, please do not use chroot for what you are trying to do. Please
 either use namespacing (i.e. mount --move) or pivot_root() for this.

  I ran `debootstrap jessie /jessie` and got a full Jessie
  installation in that subfolder (via tasksel and everything). I then
 edited
  GRUB to have a menu option that boots linux like this:
 
  linux /jessie/vmlinuz root=UUID=uid rw init=/jessie/chrootinit
  initrd /jessie/initrd.img
 
  Last, I created chrootinit that wraps systemd:
 
  #!/bin/bash
  exec chroot /jessie /sbin/init $@

 This should work fine if you use pivot_root instead of chroot. Both
 tools are part of util-linux.

 Lennart

 --
 Lennart Poettering, Red Hat

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn: cannot join existing macvlan

2015-05-08 Thread Kai Krakow
Kai Krakow hurikha...@gmail.com schrieb:

 Kai Krakow hurikha...@gmail.com schrieb:
 
 Hello again!

And again...

 Amended below...
 
 I'm not sure about this but I suspect that I cannot start a second nspawn
 container with --network-macvlan when another nspawn instance has created
 it before:
 
 # systemd-nspawn -b --network-macvlan=enp4s0
 Spawning container gentoo-mysql-base on
 /var/lib/machines/gentoo-mysql-base. Press ^] three times within 1s to
 kill container. Failed to add new macvlan interfaces: File exists
 
 To my surprise it works when adding machines to machines.target. While
 you cannot start them through means of systemd because of the same error,
 it works during boot of the whole system: All containers boot up properly
 - but stop one and you cannot restart it.
 
 So it looks like there's an unintentional race condition during boot
 which allows to create this interface but when the system is up, it no
 longer works because the race condition is no longer present.
 
 systemd-nspawn should probably just allow joining existing macvlan
 bridges. I would fix it in the code but I don't know the implications why
 this check is in there in the first place.
 
 A second fix should maybe do something about such race conditions if it
 is such one. I suspect there are cases where the interface presence check
 makes actually sense.
 
 I installed something which is called a stable v219 snapshot, I could not
 find out which changes are included, tho:
 
 *systemd-219_p112 (26 Apr 2015)
 
   26 Apr 2015; Mike Gilbert flop...@gentoo.org +systemd-219_p112.ebuild:
   Add a snapshot from the v219-stable branch upstream.
 
 The behavior described above has changed with this snapshot: Machines
 using macvlan no longer start, even not a boot-up (which worked before).
 
 The error is still the same:
 
 # systemd-nspawn -b --link-journal=try-guest --network-macvlan=enp4s0 --
 bind=/usr/portage --bind-ro=/usr/src --machine=test
 Spawning container test on /var/lib/machines/test.
 Press ^] three times within 1s to kill container.
 Failed to add new macvlan interfaces: File exists
 
 I still don't think that systemd-nspawn should insist on creating the
 host- side macvlan bridge and fail, if it cannot. It should just accept
 that it is already there.

My findings show that it actually does accept this case. But I had to 
explicitly order the machines after network.target to successfully start at 
boot time.

It looks fine so far. The stable snapshot of v219 mentioned above seems to 
actually have fixed a few issues.
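
For anyone stuck on an older version: assuming the machines run via 
systemd-nspawn@.service, a drop-in like this gave me the same ordering 
(machine name "test" as an example):

  # /etc/systemd/system/systemd-nspawn@test.service.d/order.conf
  [Unit]
  After=network.target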

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-nspawn: cannot join existing macvlan

2015-05-03 Thread Kai Krakow
Kai Krakow hurikha...@gmail.com schrieb:

Hello again!

Amended below...

 I'm not sure about this but I suspect that I cannot start a second nspawn
 container with --network-macvlan when another nspawn instance has created
 it before:
 
 # systemd-nspawn -b --network-macvlan=enp4s0
 Spawning container gentoo-mysql-base on
 /var/lib/machines/gentoo-mysql-base. Press ^] three times within 1s to
 kill container. Failed to add new macvlan interfaces: File exists
 
 To my surprise it works when adding machines to machines.target. While you
 cannot start them through means of systemd because of the same error, it
 works during boot of the whole system: All containers boot up properly -
 but stop one and you cannot restart it.
 
 So it looks like there's an unintentional race condition during boot which
 allows to create this interface but when the system is up, it no longer
 works because the race condition is no longer present.
 
 systemd-nspawn should probably just allow joining existing macvlan
 bridges. I would fix it in the code but I don't know the implications why
 this check is in there in the first place.
 
 A second fix should maybe do something about such race conditions if it is
 such one. I suspect there are cases where the interface presence check
 makes actually sense.

I installed something which is called a stable v219 snapshot, I could not 
find out which changes are included, tho:

*systemd-219_p112 (26 Apr 2015)

  26 Apr 2015; Mike Gilbert flop...@gentoo.org +systemd-219_p112.ebuild:
  Add a snapshot from the v219-stable branch upstream.

The behavior described above has changed with this snapshot: Machines using 
macvlan no longer start, not even at boot-up (which worked before).

The error is still the same:

# systemd-nspawn -b --link-journal=try-guest --network-macvlan=enp4s0 --
bind=/usr/portage --bind-ro=/usr/src --machine=test
Spawning container test on /var/lib/machines/test.
Press ^] three times within 1s to kill container.
Failed to add new macvlan interfaces: File exists

I still don't think that systemd-nspawn should insist on creating the 
host-side macvlan bridge and fail if it cannot. It should just accept that 
it is already there.

Actually I even created this device in the host with networkd because by 
design macvlan and parent device cannot communicate with each other without 
switch support and won't communicate directly locally either. Thus, you need 
to attach a host-side macvlan device to your physical parent device to 
communicate with the other virtual MAC addresses on the same host, and then 
set up your IP configuration on this device.

Of course one could argue that this is a security feature of nspawn to 
isolate containers and hosts from each other. So maybe, put an option to 
allow nspawn to join an existing macvlan, maybe --network-join-macvlan.

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] systemd-nspawn --template: should it delete /etc/hostname?

2015-05-01 Thread Kai Krakow
Hello!

If I create a new machine by cloning using systemd-nspawn --template, should 
it remove etc/hostname? It already creates a new machine-id etc, and the 
hostname should probably not be set for a new container in this case, 
regardless of whether the template is a real template or a cloned machine.

Thoughts?

I suppose something similar should be possible for statically configured IP 
addresses, as an option, tho I wouldn't know how to implement that because 
systemd-networkd doesn't expect that information at a well-defined location.

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] systemd-nspawn: cannot join existing macvlan

2015-05-01 Thread Kai Krakow
Hello!

I'm not sure about this but I suspect that I cannot start a second nspawn 
container with --network-macvlan when another nspawn instance has created it 
before:

# systemd-nspawn -b --network-macvlan=enp4s0
Spawning container gentoo-mysql-base on /var/lib/machines/gentoo-mysql-base.
Press ^] three times within 1s to kill container.
Failed to add new macvlan interfaces: File exists

To my surprise it works when adding machines to machines.target. While you 
cannot start them through means of systemd because of the same error, it 
works during boot of the whole system: All containers boot up properly - but 
stop one and you cannot restart it.

So it looks like there's an unintentional race condition during boot which 
allows to create this interface but when the system is up, it no longer 
works because the race condition is no longer present.

systemd-nspawn should probably just allow joining existing macvlan bridges. 
I would fix it in the code but I don't know the implications, i.e. why this 
check is in there in the first place.

A second fix should maybe do something about such race conditions, if it is 
one. I suspect there are cases where the interface presence check actually 
makes sense.

-- 
Replies to list only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] tmpfiles versus tmpwatch

2015-04-29 Thread Kai Krakow
Roger Qiu roger@polycademy.com schrieb:

 I'm planning to use tmpwatch's `fuser` feature.
 
 But I'd prefer to run this simple service using systemd's tmpfiles.
 Does systemd tmpfiles support running `fuser` so that way it won't
 delete any files that have an open file descriptor?
 
 I couldn't see any mention of it in the docs and source code
 (https://github.com/systemd/systemd/blob/master/src/tmpfiles/tmpfiles.c).

I don't think it does, or ever will, but I'm not a dev.

The point is: tmpwatch's fuser feature is IMHO just a countermeasure for 
filesystems mounted with noatime in combination with misbehaving software 
that keeps long-living processes holding files open in /tmp. That's wrong 
by design.

Such software should put such files in /var/tmp (which is, according to unix 
standards, volatile too, but survives reboots, and files should stay around 
30 days without usage) or in /var/{cache,spool,lib}. For /var/cache 
subdirectories you could set up tmpfiles or tmpwatch - whatever is more 
appropriate to you.
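
For example, a one-line tmpfiles.d snippet like this (path and age made up) 
ages out a cache directory based on file timestamps, no fuser needed:

  # /etc/tmpfiles.d/myapp-cache.conf
  # type  path              mode  user  group  age
  d       /var/cache/myapp  0755  root  root   30d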

Files that are held open for a very long time and never touched just don't 
belong in /tmp. And if you want to ensure that a file isn't accidentally 
deleted too early, don't enable noatime. Use relatime (or maybe lazytime 
from the next kernel versions, which is much more POSIX-conformant).

-- 
Replies to list only preferred.



[systemd-devel] man systemd.network question

2015-04-27 Thread Kai Krakow
Hello!

The man page reads:

[MATCH] SECTION OPTIONS
   The network file contains a [Match] section, which determines if a
   given network file may be applied to a given device; and a
   [Network] section specifying how the device should be configured.
   The first (in lexical order) of the network files that matches a
   given device is applied.

What exactly does this mean? Will it process further files or stop 
processing after a match?

Usually, my experience with unix is that when files are processed in 
lexical order, settings from earlier files are overridden by settings from 
later files - like, e.g., in /etc/env.d.

In that sense, it can only mean that processing stops at the first matching 
file. Otherwise the order of overriding would be reversed from expectations.

I think this should be made clearer in the man page, e.g. by stating "The 
processing of files stops at the first match." That would follow the 
example of how other projects document such behaviour.
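
A made-up pair of files to illustrate why the wording matters:

  # /etc/systemd/network/10-wired.network
  [Match]
  Name=enp0s25

  [Network]
  DHCP=yes

  # /etc/systemd/network/90-fallback.network
  [Match]
  Name=en*

  [Network]
  DHCP=no

If processing stops at the first match, enp0s25 gets DHCP from the first 
file and the fallback never applies to it; if later files were merged over 
earlier ones, DHCP would end up disabled. Only one reading can be right.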

-- 
Replies to list only preferred.



Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

 On Mon, 27.04.15 20:17, Kai Krakow (hurikha...@gmail.com) wrote:
 
 Tomasz Torcz to...@pipebreaker.pl schrieb:
 
  Well, would that enable automatic, correcting routing between the
  container and the host's external network? That's kinda what this all
  is about...
  
  If you have radvd running, it should.  By the way, speaking of NAT
  in context of IPv6 is a heresy.
 
 Why? Its purpose here is not saving addresses (we have many in IPv6);
 its purpose is security and containment. The services provided by the
 container - at least in my project - are meant to be seen as a service
 of the host (as Lennart pointed out as a possible application in
 another post). I don't want the containers to be addressable/routable
 from the outside. And putting a firewall in place to compensate for
 this is just security by obscurity: have one configuration problem and
 your firewall is gone and the container is publicly available.
 
 The whole story would be different if I'd set up port forwarding
 afterwards to make services from the containers available - but that
 won't be the case.
 
 Sidenote: systemd-nspawn already covers that for ipv4: use the --port=
 switch (or -p).

Yes, I know... And I will certainly find a use-case for that. :-)

But the general design of my project is to put containers behind a reverse 
proxy like nginx or varnish, set up some caching and WAF rules, and 
dynamically point incoming web requests to the right container servicing 
the right environment. :-)

I will probably pull performance data through such a port forwarding. But 
for now the testbed is only my desktop system; some months will pass before 
this is deployed on a broader basis, it will certainly not start with IPv6 
support (though that will be kept in mind), and I still have a lot of ideas 
to try out.

I won't even need to have IPv6 pass into the host from external networks 
because a proxy will sit in between. But it would be nice if containers 
could use IPv6 from the inside without having to worry that packets could 
pass in through a public routing rule. I don't like pulling up a firewall 
before everything is settled, tested, and secured. A firewall is only the 
last-resort barrier. The same holds true for stuff like fail2ban or 
denyhosts.

For the time being, I should simply turn off IPv6 inside the container. 
However, I haven't figured out how to keep systemd-networkd inside the 
container from configuring it.
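
Something along these lines inside the container might do it (a guess; 
host0 is the container side of nspawn's veth pair, and IPv6AcceptRA= 
assumes a networkd new enough to have that option):

  # /etc/systemd/network/80-host0.network (inside the container)
  [Match]
  Name=host0

  [Network]
  DHCP=ipv4
  LinkLocalAddressing=ipv4
  IPv6AcceptRA=no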

-- 
Replies to list only preferred.



Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

 On Mon, 27.04.15 20:08, Kai Krakow (hurikha...@gmail.com) wrote:
 
  Or in other words: ipv6 setup needs some manual networking setup on
  the host.
 
 Or there... Any pointers?
 
 Not really. You have to set up ipv6 masquerading with ip6tables. And
 ensure the containers get ipv6 addresses that are stable enough that
 you can refer to them from the ip6tables rules...

Somehow I thought I would be smart by adding this ExecStartPost script (OTOH 
it's probably just time for bed):
#!/bin/bash
# %I is passed as $1; truncate to the first 14 characters
IFNAME="${1:0:14}"
if [ -n "$IFNAME" ]; then
    # pick up the interface's global-scope IPv6 address, if any
    IP=$(ip -6 addr show dev "$IFNAME" scope global | awk '/inet6/ { print $2 }')
    # enable IPv6 forwarding on the host side of the link
    /sbin/sysctl "net.ipv6.conf.$IFNAME.forwarding=1"
    # masquerade traffic from that source (-j MASQUERADE is my reconstruction;
    # the rule as pasted to the list had lost its jump target)
    [ -z "$IP" ] || /sbin/ip6tables -t nat -I POSTROUTING --source "$IP" \
        --dest ::/0 -j MASQUERADE
fi
exit 0

and adding Address=::0/126 to the [Network] section of ve-* devices...

But somehow it does not work. If I run it manually after starting the 
container, it does its work. Of course, inside the container, the 
counterpart address won't be assigned (that works for DHCPv4 only).

If I modify the script to use scope link instead of global, it also works - 
but that won't route anyway.

I suppose that when ExecStartPost runs, the link is just not ready yet. An 
IP address fc00::... will be added to the interface, though. So at least 
that works.
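
If it really is just the link being late, a crude poll at the top of the 
ExecStartPost script might already help (a sketch, timeout chosen 
arbitrarily):

  # wait up to ~5 seconds for a global-scope address to show up
  for i in $(seq 1 50); do
      IP=$(ip -6 addr show dev "$IFNAME" scope global | awk '/inet6/ { print $2 }')
      [ -n "$IP" ] && break
      sleep 0.1
  done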

-- 
Replies to list only preferred.



Re: [systemd-devel] Rebooting systemd-nspawn container results in shutdown

2015-04-27 Thread Kai Krakow
Lennart Poettering lenn...@poettering.net schrieb:

 On Sun, 26.04.15 16:55, Kai Krakow (hurikha...@gmail.com) wrote:
 
 Hello!
 
 I've successfully created a Gentoo container on top of a Gentoo host. I
 can start the container with machinectl, as I can with systemctl start
 
 
 Inside the container (logged in via SSH), I could issue a reboot command.
 But that just results in the container being shutdown. It never comes
 back unless I restart the machine with systemctl or machinectl.
 
 What systemd versions run on the host and in the container?

systemd-219 on the host, 218 in the container.

 if you strace the nspawn process, and then issue the reboot command,
 what are the last 20 lines this generates when nspawn exits? Please
 paste somewhere.

Sure: https://gist.github.com/kakra/d2ff59deec079e027d71

 Is the service in a failed state or so when this doesn't work?

# systemctl status systemd-nspawn@gentoo\\x2dcontainer\\x2dbase.service
● systemd-nspawn@gentoo\x2dcontainer\x2dbase.service - Container gentoo-
container-base
   Loaded: loaded (/etc/systemd/system/systemd-
nspawn@gentoo\x2dcontainer\x2dbase.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Mo 2015-04-27 19:54:36 CEST; 2min 30s ago
 Docs: man:systemd-nspawn(1)
  Process: 14721 ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --
boot --link-journal=try-guest --network-veth --machine=%I --
bind=/usr/portage --bind-ro=/usr/src (code=exited, status=133)
 Main PID: 14721 (code=exited, status=133)
   Status: Terminating...

Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Reached target 
Unmount All Filesystems.
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Stopped target Local 
File Systems (Pre).
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Stopping Remount Root and 
Kernel File Systems...
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Stopped Remount Root 
and Kernel File Systems.
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: [  OK  ] Reached target 
Shutdown.
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Sending SIGTERM to remaining 
processes...
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Sending SIGKILL to remaining 
processes...
Apr 27 19:54:36 jupiter systemd-nspawn[14721]: Rebooting.
Apr 27 19:54:36 jupiter systemd[1]: Stopping Container gentoo-container-
base...
Apr 27 19:54:36 jupiter systemd[1]: Stopped Container gentoo-container-base.

 What is the log output of the service then?

Is it sufficient what's included in the status above?

 BTW: Is there a way to automatically bind-mount some directories instead
 of systemctl edit --full the service file and add those?
 
 Currently not, but there's a TODO item to add .nspawn files that may
 be placed next to container directories with additional options.

This is for an imagined use-case where I have multiple similar containers 
running which should all mount the same storage pool (e.g. web pages, with 
each container running a different PHP version).
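
If those .nspawn files materialize, I'd imagine the bind mounts ending up 
somewhere like this (pure speculation about the eventual syntax, paths 
made up):

  # /var/lib/machines/gentoo-php56.nspawn
  [Files]
  Bind=/srv/webpool
  BindReadOnly=/usr/src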

-- 
Replies to list only preferred.



Re: [systemd-devel] systemd-nspawn and IPv6

2015-04-27 Thread Kai Krakow
Tomasz Torcz to...@pipebreaker.pl schrieb:

 Well, would that enable automatic, correcting routing between the
 container and the host's external network? That's kinda what this all
 is about...
 
 If you have radvd running, it should.  By the way, speaking of NAT
 in context of IPv6 is a heresy.

Why? Its purpose here is not saving addresses (we have many in IPv6); its 
purpose is security and containment. The services provided by the container 
- at least in my project - are meant to be seen as a service of the host 
(as Lennart pointed out as a possible application in another post). I don't 
want the containers to be addressable/routable from the outside. And 
putting a firewall in place to compensate for this is just security by 
obscurity: have one configuration problem and your firewall is gone and the 
container is publicly available.

The whole story would be different if I'd set up port forwarding afterwards 
to make services from the containers available - but that won't be the case.

Each container has to be in its own private network (or grouped into a 
private network with selected other containers). Only gateway services on 
the host system (like a web proxy) are allowed to talk to the containers.

-- 
Replies to list only preferred.


