Hi folks,
I am looking for help tracking down a 90-second delay at shutdown
time. I suspect that there is a problem with unmounting the /home
directory tree (mounted via NFS).
Apparently it comes up after the journal has been stopped, so I
tried the procedure described on
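In case it helps others searching the archives: the debugging procedure
I know of for shutdown hangs boils down to making PID 1 log to the
kernel buffer, e.g. booting once with these kernel parameters (from
memory, so please double-check against the systemd Debugging wiki):

systemd.log_level=debug systemd.log_target=kmsg log_buf_len=8M printk.devkmsg=on

so that the last messages survive long enough to be read on the console.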
Hi folks,
I've got a device-busy problem with /home, mounted via NFS.
Shutdown of the host takes more than 180 secs. See attached
log file.
Apparently the umount of /home at 81925.154995 failed (device
busy; in my case it was a lost gpg-agent). This error was
ignored; the NFS framework was
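For anyone hitting the same thing: before the unmount fails, fuser or
lsof can show which processes still hold the mount open (a minimal
sketch, assuming the psmisc and lsof packages are installed):

# fuser -vm /home     # all processes with files open on the mount
# lsof +f -- /home    # same information, one line per open file

Either should have pointed at the lost gpg-agent in this case.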
On 4/5/19 8:45 AM, Mantas Mikulėnas wrote:
Normally I'd expect user sessions (user-*.slice, session-*.scope,
user@*.service) to be killed before mount units are stopped; I wonder how
random gpg-agent processes have managed to escape that. (Actually, doesn't
Debian now manage gpg-agent via
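If it is the systemd-managed gpg-agent, the user units should be
visible (a quick check, assuming a running user manager for that user):

# systemctl --user list-units 'gpg-agent*'

An empty list would mean the agent was started some other way and is
not stopped together with the user session.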
On 4/5/19 12:21 PM, Lennart Poettering wrote:
On Fri, 05.04.19 11:53, Harald Dunkel (harald.dun...@aixigo.de) wrote:
This is a VNC session, started via crontab @reboot.
IIRC debian/ubuntu do not have pam-systemd in their PAM configuration
for cron, which means these services are not tracked
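An easy way to see whether a given process is tracked as part of a
session is to look at its cgroup (plain /proc, nothing assumed beyond
a running systemd):

# cat /proc/$(pgrep -o gpg-agent)/cgroup

A session-tracked process shows up under
user.slice/user-<UID>.slice/session-<N>.scope, while a cron @reboot
job typically lands under cron.service instead.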
Hi Lennart,
On 4/5/19 10:28 AM, Lennart Poettering wrote:
For some reason a number of X session processes stick around to the
very end and thus keep your /home busy.
[82021.052357] systemd-shutdown[1]: Sending SIGKILL to remaining processes...
[82021.101976] systemd-shutdown[1]: Sending
See attachment. Hope this helps
Harri
1 epoll_wait(4, [{EPOLLIN, {u32=3589379376, u64=94720503158064}}], 36, -1) = 1
1 clock_gettime(CLOCK_BOOTTIME, {tv_sec=4562316, tv_nsec=425983895}) = 0
1 recvmsg(29, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="WATCHDOG=1\n",
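The trace above is of PID 1 itself; roughly how such a capture can be
made (root required, and stracing PID 1 deserves some caution on a
production machine):

# strace -f -p 1 -o /tmp/pid1.trace &
# systemctl kill -s HUP rsyslog.service
# kill %1
# less /tmp/pid1.trace

With -o and -f together, strace prefixes every line with the PID, which
matches the "1" column in the excerpt.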
On 8/12/20 10:32 AM, Ulrich Windl wrote:
Since you have already worked out the details, maybe you could add some
strace output, especially around the point where kill() returns...
See attachment. Hope this helps
Harri
44504 execve("/bin/systemctl", ["systemctl", "kill", "-s", "HUP",
On 8/11/20 2:27 PM, Lennart Poettering wrote:
Can you run systemctl with SYSTEMD_LOG_LEVEL=debug? Anything
interesting in the debug output it generates then? I wonder where the
I/O error comes from...
Sure:
# export SYSTEMD_LOG_LEVEL=debug
# systemctl kill -s HUP rsyslog.service
Bus n/a:
On 8/12/20 1:03 PM, Harald Dunkel wrote:
See attachment. Hope this helps
Harri
PS:
# ls -al /sys/fs/cgroup/unified/system.slice/rsyslog.service
total 0
drwxr-xr-x 2 root root 0 Jun 20 17:40 .
drwxr-xr-x 53 root root 0 Aug 12 13:30 ..
-r--r--r-- 1 root root 0 Aug 12 13:05 cgroup.controllers
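The PIDs systemd acts on come straight from cgroup.procs in that
directory, so what it sees can be inspected directly (no assumptions
beyond the unified hierarchy being mounted at that path):

# cat /sys/fs/cgroup/unified/system.slice/rsyslog.service/cgroup.procs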
On 8/13/20 9:05 AM, Andrei Borzenkov wrote:
systemd should really log this clearly (the invalid PID and which
cgroup it was in). Returning a generic error message without any
indication of what caused the error is not useful at all.
Do you think it would be reasonable to silently ignore the PID =
On 8/12/20 2:16 PM, Andrei Borzenkov wrote:
On 12.08.2020 14:03, Harald Dunkel wrote:
See attachment. Hope this helps
Harri
1 openat(AT_FDCWD, "/sys/fs/cgroup/unified/system.slice/rsyslog.service/cgroup.procs", O_RDONLY|O_CLOEXEC) = 24
1 read(24, "0\n1544456\n
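As I understand the kernel behaviour, a literal 0 in cgroup.procs is
what gets reported for a member process that cannot be mapped into the
reader's PID namespace, which fits the LXC theory below. One way to
double-check, using the second PID from the trace:

# readlink /proc/1/ns/pid /proc/1544456/ns/pid

Two different pid-namespace inodes would mean that process belongs to
a container.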
On 8/13/20 11:07 AM, Lennart Poettering wrote:
No! It's a bug. Not in systemd, but LXC. But generating errors in such
a borked setup is *good*, not bad, and certainly nothing to hide.
Surely it's not a bug in systemd, but ignoring unreasonable data (maybe
with a warning, if necessary) has a
Hi folks,
sending a HUP to rsyslog using the "systemd way" gives me an error:
# systemctl kill -s HUP rsyslog.service
Failed to kill unit rsyslog.service: Input/output error
rsyslog receives the signal, but the exit value of systemctl indicates
an error, affecting the logrotate
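As a stopgap for logrotate, the signal can be delivered without going
through systemctl (assuming the usual Debian pid file location for
rsyslogd; adjust if yours differs):

# kill -HUP "$(cat /run/rsyslogd.pid)"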
On 8/13/20 11:03 AM, Lennart Poettering wrote:
Is it possible the container and the host run in the very same cgroup
hierarchy?
If that's the case (and it looks like it): this is not
supported. Please file a bug against LXC, it's very clearly broken.
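A quick way to check is to compare the cgroup paths seen inside and
outside the container (the container name below is a placeholder):

# cat /proc/self/cgroup
# lxc-attach -n mycontainer -- cat /proc/self/cgroup

If the container's processes report paths inside the host's hierarchy
instead of a root hierarchy of their own, the container is missing its
own cgroup namespace.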
FYI:
On 2022-01-04 16:14:16, Andrei Borzenkov wrote:
You have two interfaces which export the same onboard interface index.
There is not much udev can do here; the only option is to disable
onboard interface name policy. The attributes that are used by udev
are "acpi_index" and "index". Check
On 2022-01-05 13:50:29, Mantas Mikulėnas wrote:
On Wed, Jan 5, 2022 at 9:46 AM Harald Dunkel wrote:
AFAICS today's kernel still assigns the "legacy" interface names,
which are renamed by udev later. I would suggest improving the conflict
It does, yes, but note this part:
Jan 0
On 2022-01-05 11:17:20, Martin Wilck wrote:
This is default behavior. To disable it, you need to use
"net.ifnames=0". If you see the same value multiple times for either
"acpi_index" or "index", it'd be a firmware problem. I suppose it can
happen that one device has acpi_index==1 and another
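For the record, on Debian that parameter goes onto the kernel command
line via GRUB (the usual workflow, assuming GRUB is the bootloader):

# editor /etc/default/grub    # append net.ifnames=0 to GRUB_CMDLINE_LINUX
# update-grub
# reboot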
Hi folks,
after the upgrade from Buster to Bullseye (including the migration from
sysv init to systemd) the network interface names were messed up on
several hosts. Apparently udev stumbles over a naming conflict:
# journalctl -b | egrep -i e1000e\|igb\|rename\|eth\|enp\|eno
Jan 03 11:30:14
On 2022-01-05 21:48:11, Michael Biebl wrote:
On Wed, Jan 5, 2022 at 13:50, Mantas Mikulėnas wrote:
It does, yes, but note this part:
Jan 03 11:30:14 nasl002b.example.com kernel: igb 0000:02:00.2 eth4: renamed from eth2
Jan 03 11:30:14 nasl002b.example.com kernel: igb 0000:02:00.3 eth5:
On 2022-01-06 13:23:37, Michael Biebl wrote:
On Thu, Jan 6, 2022 at 10:00, Mantas Mikulėnas wrote:
Grep your entire /etc for those interface names (starting with /etc/udev), find
out where they're defined, and remove them
Please also make sure to rebuild your initramfs after doing
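Concretely, something along these lines (Debian tooling; the grep is
just a search aid):

# grep -rE 'eth[0-9]' /etc 2>/dev/null
# update-initramfs -u -k all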
Hi folks,
systemctl status does a nice job of showing LXC containers and their
process trees, but I wonder if it could also show memory and CPU
limits, memory utilization, swap, etc., even if the LXC or Docker or
whatever container wasn't started by systemd? cgroup1 and unified,
if possible.
I
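In the meantime, systemd-cgtop covers the utilization side, since it
walks the whole cgroup tree rather than only systemd-managed units, and
for units the raw numbers are queryable (accounting has to be enabled
for them to be populated):

# systemd-cgtop -m
# systemctl show -p MemoryCurrent -p TasksCurrent rsyslog.service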