[systemd-devel] more verbose debug info than systemd.log_level=debug?

2017-03-16 Thread Chris Murphy
I've got a Fedora 22, 23, 24, 25 bug where a systemd offline update of
the kernel results in an unbootable system, but only on XFS (/boot is a
directory): the system never reaches a grub menu. The details of that
are in this bug's comment:

https://bugzilla.redhat.com/show_bug.cgi?id=1227736#c39

The gist of it is that the file system is dirty following the offline
update, and grub.cfg is 0 length. If the fs is mounted from a rescue
system, the XFS journal is replayed and cleans things up; grub.cfg is
then valid, and at the next reboot there is a grub menu as expected,
with the newly installed kernel.

That bug is on bare metal for another user, but I've reproduced it in
qemu-kvm, using the boot parameters systemd.log_level=debug
systemd.log_target=console console=ttyS0,38400 and virsh console to
capture what's going on during the offline update that results in the
dirty file system.

What I get is more confusing than helpful:



Sending SIGTERM to remaining processes...
Sending SIGKILL to remaining processes...
Process 304 (plymouthd) has been marked to be excluded from killing.
It is running from the root file system, and thus likely to block
re-mounting of the root file system to read-only. Please consider
moving it into an initrd file system instead.
Unmounting file systems.
Remounting '/tmp' read-only with options 'seclabel'.
Unmounting /tmp.
Remounting '/' read-only with options 'seclabel,attr2,inode64,noquota'.
Remounting '/' read-only with options 'seclabel,attr2,inode64,noquota'.
Remounting '/' read-only with options 'seclabel,attr2,inode64,noquota'.
All filesystems unmounted.
Deactivating swaps.
All swaps deactivated.
Detaching loop devices.
device-enumerator: scan all dirs
  device-enumerator: scanning /sys/bus
  device-enumerator: scanning /sys/class
All loop devices detached.
Detaching DM devices.
device-enumerator: scan all dirs
  device-enumerator: scanning /sys/bus
  device-enumerator: scanning /sys/class
All DM devices detached.
Spawned /usr/lib/systemd/system-shutdown/mdadm.shutdown as 8408.
/usr/lib/systemd/system-shutdown/mdadm.shutdown succeeded.
system-shutdown succeeded.
Failed to read reboot parameter file: No such file or directory
Rebooting.
[   52.963598] Unregister pv shared memory for cpu 0
[   52.965736] Unregister pv shared memory for cpu 1
[   52.966795] sd 1:0:0:0: [sda] Synchronizing SCSI cache
[   52.991220] reboot: Restarting system
[   52.993119] reboot: machine restart


1. Why are there three remount read-only entries? Are these failing?
The same three entries appear when the file system is Btrfs, so it's
not an XFS-specific anomaly.

2. "All filesystems unmounted." What condition is required to generate
this message? I'm asking whether it's reliable, or whether it's
possible that after three failed read-only remounts systemd gives up,
claims the file systems are unmounted, and then reboots.

There is an XFS-specific problem here, as the dirty fs only happens on
XFS; the file system is clean if it's ext4 or Btrfs. Nevertheless, it
looks like something is holding up the remount, and no return value
from umount is logged.

Is there a way to get more information during shutdown than this? The
question at this point is why the XFS volume is dirty at reboot time,
but there's not much to go on, as I get exactly the same console
messages for ext4 and Btrfs, which don't leave a dirty fs at reboot
following an offline update.
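One avenue I can think of is a hook in the same system-shutdown
directory that ran mdadm.shutdown in the log above. A sketch (the file
name and output handling are hypothetical, not something I've verified
at actual shutdown time):

```shell
#!/bin/sh
# Hypothetical debug hook: /usr/lib/systemd/system-shutdown/debug.sh
# (must be executable). systemd-shutdown runs every executable in that
# directory with one argument (halt, poweroff, reboot or kexec), after
# services are killed but before the final reboot() call.
dump_mounts() {
    echo "system-shutdown hook: action=$1"
    cat /proc/mounts    # what is still mounted, and with which options
}
# Send it to the kernel ring buffer so it shows up on the serial
# console next to the systemd messages; ignore errors if /dev/kmsg
# is unavailable.
dump_mounts "$1" > /dev/kmsg 2>/dev/null || true
```

That would at least show whether / is really remounted read-only, and
with what options, right before the reboot() call.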


Thanks,


-- 
Chris Murphy
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] How to set environment variable that is available for later services/shell

2017-03-16 Thread Sam Ravnborg
On Wed, Mar 15, 2017 at 12:21:49AM +, Zbigniew Jędrzejewski-Szmek wrote:
> On Sat, Mar 11, 2017 at 06:14:17PM +0100, Sam Ravnborg wrote:
> > Hi all.
> > 
> > How can we set environment variables so they are available for everyone?
> > 
> > 
> > Background:
> > 
> > On an embedded target we use QT applications.
> > 
> > To start our QT application we need to set environment variables:
> > 
> > QT_QPA_GENERIC_PLUGINS=tslib:/dev/input/event0 
> > QT_QPA_PLATFORM=linuxfb:fb=/dev/fb0:mmsize=154x86:size=800x480 
> > ...
> > 
> > 
> > For now we use:
> > [Service]   
> > 
> > EnvironmentFile=/etc/foo.conf
> > 
> > This has two drawbacks:
> > 1) We cannot start the QT application from the
> >command-line (unless we source the .conf file)
> > 2) We need to use different .conf files (thus different
> >.service files) depending on the HW configuration (display type/size)
> > 
> > So we are looking for a way to set the environment variables early
> > such that they are available in the services launched later,
> > and they are available in the shell.
> > The actual system configuration will determine the values and
> > it may differ from boot to boot.
> 
> There are a few different mechanisms, and which one is the best
> fit depends on when the information about the content of those
> variables is known and how widely it should be disseminated.
> 
> There are at least the following possibilities:
> 1. you could generate a file like /run/qtvars.conf from some service,
> and make sure that this service is started at boot before any
> relevant services (including user sessions), and import it there,
> by doing EnvironmentFile=/run/qtvars.conf.
> 
> The advantage is that this gets run asynchronously during boot,
> so it can wait for hardware to be detected.
> 
> 2. (with systemd 233) you could also create a file like
> /run/environment.d/60-qt.conf with similar contents,
> and variables set therein would be available for services
> started from systemd --user, and possibly some login environments.
> This part is a work in progress, in particular ssh logins do not
> inherit those variables. So this might be a solution in the
> future, but probably not atm.
> 
> It sounds like doing 1. + adding
> '[ -f /run/qtvars.conf ] && . /run/qtvars.conf'
> somewhere in /etc/profile.d might be your best option.

Hi Zbyszek.

Thanks for the response.
What we ended up with was your solution 1: a service that
generates a file in /run/etc/*. The service is "WantedBy=multi-user.target"
and thus runs before any QT applications.

For the shell (both serial and ssh) we implemented
a script in /etc/profile.d/ which sets and exports the relevant
variables. So again, more or less as suggested by you.
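The shape of that setup, with hypothetical file and unit names (our
real ones differ), is roughly:

```shell
#!/bin/sh
# Hypothetical generator, e.g. /usr/local/bin/gen-qt-env, started from
# a oneshot qt-env.service that is WantedBy=multi-user.target and
# ordered Before= the QT services. It probes the hardware and writes
# /run/qtvars.conf; the QT services then pick it up with
# EnvironmentFile=/run/qtvars.conf.
gen_qt_env() {
    # A real generator would derive these from the detected display;
    # the values here are just the ones from the original mail.
    cat <<'EOF'
QT_QPA_GENERIC_PLUGINS=tslib:/dev/input/event0
QT_QPA_PLATFORM=linuxfb:fb=/dev/fb0:mmsize=154x86:size=800x480
EOF
}
{ gen_qt_env > /run/qtvars.conf; } 2>/dev/null || true

# The /etc/profile.d/ script then exports the same values for serial
# and ssh shells, e.g.:
#   [ -f /run/qtvars.conf ] && set -a && . /run/qtvars.conf && set +a
```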

We (a colleague of mine) implemented this before your reply,
but it was great to get confirmation that what we have is a good solution.

So thanks again,

Sam


Re: [systemd-devel] F25: NAMESPACE spawning: Too many levels of symbolic links

2017-03-16 Thread Reindl Harald



On 16.03.2017 at 18:21, Michal Sekletar wrote:

On Thu, Mar 16, 2017 at 4:29 PM, Reindl Harald wrote:

with systemd-229-18.fc24.x86_64 no problem at all; after the upgrade
to F25, spawning "/usr/bin/vmware-networks" fails, although it is just
a physical file and was not touched

[root@rh:~]$ rpm -q systemd
systemd-231-14.fc25.x86_64

Mar 16 16:25:23 rh systemd: vmware-vmnet.service: Failed at step NAMESPACE
spawning /usr/bin/vmware-networks: Too many levels of symbolic links
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Control process exited,
code=exited status=226
Mar 16 16:25:24 rh systemd: Failed to start VMware Virtual Machine Ethernet.
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Unit entered failed state.
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Failed with result
'exit-code'.

[root@rh:~]$ stat /usr/bin/vmware-networks
  File: '/usr/bin/vmware-networks'
  Size: 1189920 Blocks: 2328   IO Block: 4096   regular file
Device: 901h/2305d  Inode: 1308258 Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/root)
Access: 2017-03-13 13:50:05.693010420 +0100
Modify: 2017-03-13 13:50:05.734010674 +0100
Change: 2017-03-13 13:50:05.764010860 +0100
 Birth: -

[root@rh:~]$ cat /etc/systemd/system/vmware-vmnet.service
[Unit]
Description=VMware Virtual Machine Ethernet
After=vmware-modules.service
Requires=vmware-modules.service
Before=network.service systemd-networkd.service

[Service]
Type=forking
ExecStart=/usr/bin/vmware-networks --start
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.forwarding=1
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.log_martians=0
ExecStop=/usr/bin/vmware-networks --stop

ReadOnlyDirectories=/usr
InaccessibleDirectories=-/boot
InaccessibleDirectories=-/home
InaccessibleDirectories=-/media
InaccessibleDirectories=-/data
InaccessibleDirectories=-/mnt
InaccessibleDirectories=-/mnt/data
InaccessibleDirectories=-/root

[Install]
WantedBy=multi-user.target


Are any of the inaccessible directories actually symlinks? If so, then
I believe you are hitting:


yes, /data is a symlink on this machine, while on other machines
sharing that unit it is a physical directory



https://github.com/systemd/systemd/issues/3867

This is fixed upstream, but the patches have not been backported to
Fedora 25 yet. Here are Fedora bugs mentioning similar symptoms:

https://bugzilla.redhat.com/show_bug.cgi?id=131
https://bugzilla.redhat.com/show_bug.cgi?id=1414157
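A quick way to confirm which of the listed paths is affected (a
sketch; on this systemd version any symlink named in
InaccessibleDirectories= would trigger the ELOOP from issue #3867):

```shell
#!/bin/sh
# Print every argument that is a symlink, with its target; any hit in
# the InaccessibleDirectories= list would explain the
# "Too many levels of symbolic links" failure seen on systemd 231.
check_symlinks() {
    for p in "$@"; do
        [ -L "$p" ] && echo "$p is a symlink to $(readlink "$p")"
    done
    return 0    # a non-symlink last path must not fail the script
}
check_symlinks /boot /home /media /data /mnt /mnt/data /root
```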



Re: [systemd-devel] F25: NAMESPACE spawning: Too many levels of symbolic links

2017-03-16 Thread Michal Sekletar
On Thu, Mar 16, 2017 at 4:29 PM, Reindl Harald wrote:
> with systemd-229-18.fc24.x86_64 no problem at all; after the upgrade
> to F25, spawning "/usr/bin/vmware-networks" fails, although it is just
> a physical file and was not touched
>
> [root@rh:~]$ rpm -q systemd
> systemd-231-14.fc25.x86_64
>
> Mar 16 16:25:23 rh systemd: vmware-vmnet.service: Failed at step NAMESPACE
> spawning /usr/bin/vmware-networks: Too many levels of symbolic links
> Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Control process exited,
> code=exited status=226
> Mar 16 16:25:24 rh systemd: Failed to start VMware Virtual Machine Ethernet.
> Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Unit entered failed state.
> Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Failed with result
> 'exit-code'.
>
> [root@rh:~]$ stat /usr/bin/vmware-networks
>   File: '/usr/bin/vmware-networks'
>   Size: 1189920 Blocks: 2328   IO Block: 4096   regular file
> Device: 901h/2305d  Inode: 1308258 Links: 1
> Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/root)
> Access: 2017-03-13 13:50:05.693010420 +0100
> Modify: 2017-03-13 13:50:05.734010674 +0100
> Change: 2017-03-13 13:50:05.764010860 +0100
>  Birth: -
>
> [root@rh:~]$ cat /etc/systemd/system/vmware-vmnet.service
> [Unit]
> Description=VMware Virtual Machine Ethernet
> After=vmware-modules.service
> Requires=vmware-modules.service
> Before=network.service systemd-networkd.service
>
> [Service]
> Type=forking
> ExecStart=/usr/bin/vmware-networks --start
> ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.forwarding=1
> ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.log_martians=0
> ExecStop=/usr/bin/vmware-networks --stop
>
> ReadOnlyDirectories=/usr
> InaccessibleDirectories=-/boot
> InaccessibleDirectories=-/home
> InaccessibleDirectories=-/media
> InaccessibleDirectories=-/data
> InaccessibleDirectories=-/mnt
> InaccessibleDirectories=-/mnt/data
> InaccessibleDirectories=-/root
>
> [Install]
> WantedBy=multi-user.target


Are any of the inaccessible directories actually symlinks? If so, then
I believe you are hitting:

https://github.com/systemd/systemd/issues/3867

This is fixed upstream, but the patches have not been backported to
Fedora 25 yet. Here are Fedora bugs mentioning similar symptoms:

https://bugzilla.redhat.com/show_bug.cgi?id=131
https://bugzilla.redhat.com/show_bug.cgi?id=1414157

Michal




Re: [systemd-devel] F25: NAMESPACE spawning: Too many levels of symbolic links

2017-03-16 Thread Reindl Harald
and once again systemd can't handle its own restrictions, as was
already the case a few releases ago when /proc and /sys were set in
"ReadOnlyDirectories="

but what, then, is the nonsense about "Too many levels of symbolic
links", and why did it work unchanged with F22, F23 and F24, until the
new systemd version appeared with F25?


interesting that other services have no problem with the same
unit options


[root@rh:~]$ cat /etc/systemd/system/vmware-vmnet.service
[Unit]
Description=VMware Virtual Machine Ethernet
After=vmware-modules.service
Requires=vmware-modules.service
Before=network.service systemd-networkd.service

[Service]
Type=forking
ExecStart=/usr/bin/vmware-networks --start
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.forwarding=1
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.log_martians=0
ExecStop=/usr/bin/vmware-networks --stop

#ReadOnlyDirectories=/usr
#InaccessibleDirectories=-/boot
#InaccessibleDirectories=-/home
#InaccessibleDirectories=-/media
#InaccessibleDirectories=-/data
#InaccessibleDirectories=-/mnt
#InaccessibleDirectories=-/mnt/data
#InaccessibleDirectories=-/root

[Install]
WantedBy=multi-user.target

On 16.03.2017 at 16:29, Reindl Harald wrote:

with systemd-229-18.fc24.x86_64 no problem at all; after the upgrade
to F25, spawning "/usr/bin/vmware-networks" fails, although it is just
a physical file and was not touched

[root@rh:~]$ rpm -q systemd
systemd-231-14.fc25.x86_64

Mar 16 16:25:23 rh systemd: vmware-vmnet.service: Failed at step
NAMESPACE spawning /usr/bin/vmware-networks: Too many levels of symbolic
links
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Control process
exited, code=exited status=226
Mar 16 16:25:24 rh systemd: Failed to start VMware Virtual Machine
Ethernet.
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Unit entered failed
state.
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Failed with result
'exit-code'.

[root@rh:~]$ stat /usr/bin/vmware-networks
  File: '/usr/bin/vmware-networks'
  Size: 1189920 Blocks: 2328   IO Block: 4096   regular file
Device: 901h/2305d  Inode: 1308258 Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/root)
Access: 2017-03-13 13:50:05.693010420 +0100
Modify: 2017-03-13 13:50:05.734010674 +0100
Change: 2017-03-13 13:50:05.764010860 +0100
 Birth: -

[root@rh:~]$ cat /etc/systemd/system/vmware-vmnet.service
[Unit]
Description=VMware Virtual Machine Ethernet
After=vmware-modules.service
Requires=vmware-modules.service
Before=network.service systemd-networkd.service

[Service]
Type=forking
ExecStart=/usr/bin/vmware-networks --start
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.forwarding=1
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.log_martians=0
ExecStop=/usr/bin/vmware-networks --stop

ReadOnlyDirectories=/usr
InaccessibleDirectories=-/boot
InaccessibleDirectories=-/home
InaccessibleDirectories=-/media
InaccessibleDirectories=-/data
InaccessibleDirectories=-/mnt
InaccessibleDirectories=-/mnt/data
InaccessibleDirectories=-/root

[Install]
WantedBy=multi-user.target



[systemd-devel] F25: NAMESPACE spawning: Too many levels of symbolic links

2017-03-16 Thread Reindl Harald
with systemd-229-18.fc24.x86_64 no problem at all; after the upgrade
to F25, spawning "/usr/bin/vmware-networks" fails, although it is just
a physical file and was not touched


[root@rh:~]$ rpm -q systemd
systemd-231-14.fc25.x86_64

Mar 16 16:25:23 rh systemd: vmware-vmnet.service: Failed at step 
NAMESPACE spawning /usr/bin/vmware-networks: Too many levels of symbolic 
links
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Control process 
exited, code=exited status=226

Mar 16 16:25:24 rh systemd: Failed to start VMware Virtual Machine Ethernet.
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Unit entered failed state.
Mar 16 16:25:24 rh systemd: vmware-vmnet.service: Failed with result 
'exit-code'.


[root@rh:~]$ stat /usr/bin/vmware-networks
  File: '/usr/bin/vmware-networks'
  Size: 1189920 Blocks: 2328   IO Block: 4096   regular file
Device: 901h/2305d  Inode: 1308258 Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (0/root)   Gid: (0/root)
Access: 2017-03-13 13:50:05.693010420 +0100
Modify: 2017-03-13 13:50:05.734010674 +0100
Change: 2017-03-13 13:50:05.764010860 +0100
 Birth: -

[root@rh:~]$ cat /etc/systemd/system/vmware-vmnet.service
[Unit]
Description=VMware Virtual Machine Ethernet
After=vmware-modules.service
Requires=vmware-modules.service
Before=network.service systemd-networkd.service

[Service]
Type=forking
ExecStart=/usr/bin/vmware-networks --start
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.forwarding=1
ExecStartPost=-/usr/sbin/sysctl -e -w net.ipv4.conf.vmnet8.log_martians=0
ExecStop=/usr/bin/vmware-networks --stop

ReadOnlyDirectories=/usr
InaccessibleDirectories=-/boot
InaccessibleDirectories=-/home
InaccessibleDirectories=-/media
InaccessibleDirectories=-/data
InaccessibleDirectories=-/mnt
InaccessibleDirectories=-/mnt/data
InaccessibleDirectories=-/root

[Install]
WantedBy=multi-user.target