Re: [systemd-devel] systemd-nspawn leaves leftovers in /tmp
This might be due to trying to use systemd-nspawn -x with a raw image inside
the btrfs /var/lib/machines volume. It doesn't work in the sense that the
container isn't ephemeral, but there's no error message either, and this
leftover gets created. If I jump through elaborate hoops to create the
container as a btrfs subvolume instead of using the pull-raw one-liner, the
-x flag works as expected and there is no leftover in /tmp.

On Thu, Nov 3, 2016 at 11:54 AM, Lennart Poettering wrote:
> On Thu, 03.11.16 11:34, Bill Lipa (d...@masterleep.com) wrote:
>
> > I am using systemd-nspawn to run a short-lived process in a container.
> > This is a fairly frequent operation (once every few seconds). Each
> > time systemd-nspawn runs, it leaves a temporary empty directory like
> > /tmp/nspawn-root-CPeQjR. These directories don't seem to get cleaned
> > up.
>
> Generally, temporary files like this should not be left around by
> commands that exit cleanly. If they do, then that's a bug, please file
> a bug. (But first, please retry on the two most current systemd
> versions; we only track issues with those upstream.)

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
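For readers hitting the same issue, the workaround described above can be sketched roughly like this. This is a hedged sketch, not taken from the report: the machine name is invented, and the step that populates the subvolume is elided.

```
# The pull-raw route stores the container as a raw image file, which
# systemd-nspawn -x apparently cannot snapshot:
#   machinectl pull-raw <url> mymachine
#
# Workaround sketch: create the container as a btrfs subvolume instead,
# so that -x (--ephemeral) can snapshot it and discard the snapshot:
btrfs subvolume create /var/lib/machines/mymachine
# ... populate /var/lib/machines/mymachine with a root filesystem ...
systemd-nspawn -x -M mymachine /usr/bin/true
```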
Re: [systemd-devel] nfs-server.service starts before _netdev iscsi mount completes (required)... how can I fix this?
On Fri, 04.11.16 11:12, c...@endlessnow.com (c...@endlessnow.com) wrote:

> > On Thu, Nov 03, 2016 at 04:01:15PM -0700, c...@endlessnow.com wrote:
> > > so I'm using CentOS 7, and we're mounting a disk from our iSCSI
> > > SAN and then we want to export that via NFS. But on a fresh boot the
> > > nfs-server service fails because the filesystem isn't there yet. Any
> > > ideas on how to fix this?
> >
> > Add RequiresMountsFor=/your/export/path to nfs-server.service
>
> (first, apologies for the formatting, using a very limited web-based i/f)
>
> I tried creating a nfs-server.service.d directory with a
> required-mounts.conf with that line in it and it did not work.
> However, adding the line directly to the nfs-server.service file did
> work. Can't we add this using a nfs-server.service.d directory and
> conf file?

    mkdir -p /etc/systemd/system/nfs-server.service.d/
    echo "[Unit]" > /etc/systemd/system/nfs-server.service.d/50-myorder.conf
    echo "RequiresMountsFor=/foo/bar/baz" >> /etc/systemd/system/nfs-server.service.d/50-myorder.conf
    systemctl daemon-reload

Lennart

--
Lennart Poettering, Red Hat
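For reference, the two echo commands above just build this drop-in file (the path /foo/bar/baz is Lennart's placeholder, not a real export):

```
# /etc/systemd/system/nfs-server.service.d/50-myorder.conf
[Unit]
RequiresMountsFor=/foo/bar/baz
```

Note that the [Unit] section header is required; a drop-in containing only the bare RequiresMountsFor= line would not be parsed as intended, which may explain why the poster's first required-mounts.conf attempt appeared not to work.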
Re: [systemd-devel] Emergency mode if non-critical /etc/fstab entries are missing
On Fri, 04.11.16 16:14, Marc Haber (mh+systemd-de...@zugschlus.de) wrote:

> On Thu, Nov 03, 2016 at 10:55:35PM +0100, Lennart Poettering wrote:
> > On Mon, 26.09.16 07:02, Marc Haber (mh+systemd-de...@zugschlus.de) wrote:
> > > On Mon, Sep 26, 2016 at 10:52:50AM +1300, Sergei Franco wrote:
> > > > The emergency mode assumes console access, which requires physical
> > > > access, which is quite difficult if the machine is remote.
> > >
> > > It does also assume knowledge of the root password, which is in
> > > enterprise environments not often the case. Enterprises usually have
> > > root passwords stowed away in a safe, behind a three-headed guard dog,
> > > requiring management approval and >2 eyes mechanisms, and usually
> > > have password-changing processes attached that touch other machines
> > > sharing the same root password as well (for example because the root
> > > password hash is stamped into the golden image).
> > >
> > > Many enterprise environments that I know have their processes geared
> > > in a way that the root password is not needed in daily operation.
> > > Login via ssh key, privilege escalation via sudo.
> > >
> > > systemd requiring the root password because some tertiary file system
> > > doesn't mount is a nuisance for those environments.
> > >
> > > Some sites have resorted to adding "nofail" to all fstab lines, just to
> > > find themselves with the next issue, since the initramfs of some
> > > distributions doesn't know this option yet.
> >
> > "nofail" has been around as long as fstab has been around, really. It's
> > not a systemd invention.
>
> I cannot say anything about that, I don't have any non-systemd
> machines left. However, that machines stop booting and require the
> root password is a totally new experience for me that came with systemd.

Well, some distros ignored the return value of mount -a; we generally try
not to ignore error conditions, in particular if they might be relevant
for security.
Lennart

--
Lennart Poettering, Red Hat
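For context, the "nofail" option being discussed is set per fstab entry; a hedged example (the device and mount point are invented for illustration):

```
# /etc/fstab - a tertiary filesystem that should not drag the boot
# into emergency mode when it is missing or fails to mount:
/dev/sdb1  /srv/scratch  ext4  defaults,nofail  0  2
```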
Re: [systemd-devel] Emergency mode if non-critical /etc/fstab entries are missing
On Thu, Nov 03, 2016 at 10:55:35PM +0100, Lennart Poettering wrote:
> On Mon, 26.09.16 07:02, Marc Haber (mh+systemd-de...@zugschlus.de) wrote:
> > On Mon, Sep 26, 2016 at 10:52:50AM +1300, Sergei Franco wrote:
> > > The emergency mode assumes console access, which requires physical
> > > access, which is quite difficult if the machine is remote.
> >
> > It does also assume knowledge of the root password, which is in
> > enterprise environments not often the case. Enterprises usually have
> > root passwords stowed away in a safe, behind a three-headed guard dog,
> > requiring management approval and >2 eyes mechanisms, and usually
> > have password-changing processes attached that touch other machines
> > sharing the same root password as well (for example because the root
> > password hash is stamped into the golden image).
> >
> > Many enterprise environments that I know have their processes geared
> > in a way that the root password is not needed in daily operation.
> > Login via ssh key, privilege escalation via sudo.
> >
> > systemd requiring the root password because some tertiary file system
> > doesn't mount is a nuisance for those environments.
> >
> > Some sites have resorted to adding "nofail" to all fstab lines, just to
> > find themselves with the next issue, since the initramfs of some
> > distributions doesn't know this option yet.
>
> "nofail" has been around as long as fstab has been around, really. It's
> not a systemd invention.

I cannot say anything about that, I don't have any non-systemd
machines left. However, that machines stop booting and require the
root password is a totally new experience for me that came with systemd.

Greetings
Marc

--
-----------------------------------------------------------------------------
Marc Haber         | "I don't trust Computers. They | Mailadresse im Header
Leimen, Germany    |  lose things."  Winona Ryder   | Fon: *49 6224 1600402
Nordisch by Nature |  How to make an American Quilt | Fax: *49 6224 1600421
Re: [systemd-devel] How to make udev not touch my device?
Hello Michal,

Michal Privoznik [2016-11-04 8:47 +0100]:
> That means that whenever a VM is being started up, libvirtd (our
> daemon we have) relabels all the necessary paths that QEMU process
> (representing VM) can touch.

Does that mean it's shipping an udev rule that does that? Or does it
actually listen to uevents by itself (possibly via libudev) and apply
the labels?

> However, I'm facing an issue that I don't know how to fix. In some cases
> QEMU can close & reopen a block device. However, closing a block device
> triggers an event, and hence if there is a rule that sets a security
> label on a device, the QEMU process is unable to reopen the device again.

Is that triggering the above libvirtd action (in the daemon via libudev
or via an udev rule), or is that something else?

> My question is, what can we do to prevent udev from mangling the
> security labels that we've set on the devices?

Sorry for my ignorance, but my question in return is: what's the udev
rule that mangles with it in the first place? I don't see any such rule
in upstream systemd or in Debian/Ubuntu, but it's of course possible
that Fedora ships such a rule via another package.

> One of the ideas our lead developer had was for libvirt to set some kind
> of udev label on devices managed by libvirt (when setting up security
> labels) and then whenever udev sees such labelled device it won't touch
> it at all (this could be achieved by a rule perhaps?). Later, when
> domain is shutting down libvirt removes that label. But I don't think
> setting an arbitrary label on devices is supported, is it?

It actually is -- they are called "tags" (TAG+=) and "properties"
(ENV{PROPNAME}="foo"), see udev(7). So indeed the most straightforward
way would be to tag or set a property on those devices which you want to
handle in libvirtd yourself, and then add something like

    TAG=="libvirtd", GOTO="skip_selinux_context"
    [... original rule that changes context goes here ...]
    LABEL="skip_selinux_context"

But for further details I need to understand the actual rules involved.

Martin

--
Martin Pitt                        | http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer (www.debian.org)
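Putting Martin's suggestion together as a complete (hypothetical) rules file - the file name, the serial-number match, and the placeholder relabeling rule are all assumptions for illustration, not from the thread:

```
# /etc/udev/rules.d/90-libvirt-managed.rules (hypothetical sketch)

# Tag block devices that libvirt manages. Matching on a serial number
# here is only an illustration; libvirt would need its own way of
# identifying its devices:
SUBSYSTEM=="block", ENV{ID_SERIAL}=="EXAMPLE_DISK_1", TAG+="libvirtd"

# Tagged devices bypass the relabeling rule entirely:
TAG=="libvirtd", GOTO="skip_selinux_context"

# ... the original rule that changes the security context goes here ...

LABEL="skip_selinux_context"
```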
Re: [systemd-devel] nfs-server.service starts before _netdev iscsi mount completes (required)... how can I fix this?
> On Thu, Nov 03, 2016 at 04:01:15PM -0700, c...@endlessnow.com wrote:
> > so I'm using CentOS 7, and we're mounting a disk from our iSCSI
> > SAN and then we want to export that via NFS. But on a fresh boot the
> > nfs-server service fails because the filesystem isn't there yet. Any
> > ideas on how to fix this?
>
> Add RequiresMountsFor=/your/export/path to nfs-server.service

(first, apologies for the formatting, using a very limited web-based i/f)

I tried creating a nfs-server.service.d directory with a
required-mounts.conf with that line in it and it did not work.
However, adding the line directly to the nfs-server.service file did
work. Can't we add this using a nfs-server.service.d directory and
conf file?
Re: [systemd-devel] xmodmap gives: No protocol specified
Am 04.11.2016 um 16:26 schrieb Cecil Westerhof:
> 2016-11-04 15:46 GMT+01:00 Cecil Westerhof:
> > I want to set my own keyboard definitions when they get lost. They
> > sometimes do. The only way of doing this automatically is in a cronjob
> > or a systemd service. I would prefer a systemd service. But for the
> > moment I rewrite the script to be run from cron instead of systemd.
>
> That does not work either: in cron I get the same errors. :'-(

You *cannot* start a GUI application without a GUI, and neither systemd
services nor cron jobs have any GUI - that's it. What you are trying to
do is simply not possible.
Re: [systemd-devel] xmodmap gives: No protocol specified
2016-11-04 15:46 GMT+01:00 Cecil Westerhof:
> I want to set my own keyboard definitions when they get lost. They
> sometimes do. The only way of doing this automatically is in a cronjob
> or a systemd service. I would prefer a systemd service. But for the
> moment I rewrite the script to be run from cron instead of systemd.

That does not work either: in cron I get the same errors. :'-(

--
Cecil Westerhof
Re: [systemd-devel] xmodmap gives: No protocol specified
2016-11-04 13:29 GMT+01:00 Mantas Mikulėnas:
> On Fri, Nov 4, 2016 at 1:47 PM, Cecil Westerhof wrote:
> >
> > I have a script I want to run as a service which uses:
> >     xmodmap -pk
> > I have to define the DISPLAY, so I use:
> >     export DISPLAY=:0.0
> >
> > But this gives:
> >     xmodmap: unable to open display ':0.0'
> >
> > When I try the same with at, I do not have this problem.
> >
> > What is happening here and how can I resolve this?
>
> What's happening is that you shouldn't run X11 programs as system
> services.

It worked with xscreensaver:
https://www.linkedin.com/pulse/saving-netbook-battery-bash-script-cecil-westerhof

> Most of the time, Xlib saying "No protocol specified" means that the X
> server rejected the connection attempt due to missing authentication
> details. To fix that, either the program needs the path to your Xauthority
> file, or the X server needs to be configured to allow all connections by
> your UID.

The strange thing is that I use the same user with at and with systemd.
So what is the difference? But I also tried setting XAUTHORITY with:
    export XAUTHORITY=~/.Xauthority
With 'xauth info' I saw that the authority file is:
    /run/lightdm/cecil/xauthority
but using that in the above statement did not help either.

> The default location of Xauth information is ~/.Xauthority – if your
> program cannot find it, either you need to set $HOME, or your display
> manager (gdm &c.) probably stored it elsewhere and you need to set
> $XAUTHORITY. Either way, don't run X11 programs as system services.

I want to set my own keyboard definitions when they get lost. They
sometimes do. The only way of doing this automatically is in a cronjob
or a systemd service. I would prefer a systemd service. But for the
moment I rewrite the script to be run from cron instead of systemd.

--
Cecil Westerhof
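A common pattern for this kind of task - not suggested anywhere in the thread, so treat it as a hedged sketch - is a systemd *user* service rather than a system service. The display number and script path below are assumptions; the Xauthority path is the one `xauth info` reported above:

```
# ~/.config/systemd/user/fix-keymap.service (hypothetical name)
[Unit]
Description=Re-apply personal xmodmap keyboard definitions

[Service]
Type=oneshot
# Assumed values - match them to what `echo $DISPLAY` and `xauth info`
# report inside a working graphical session:
Environment=DISPLAY=:0
Environment=XAUTHORITY=/run/lightdm/cecil/xauthority
ExecStart=/home/cecil/bin/fix-keymap.sh

[Install]
WantedBy=default.target
```

Enabled with `systemctl --user enable fix-keymap.service`, it runs inside the user's session environment, which sidesteps part of the authentication problem described above.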
Re: [systemd-devel] automount unit that never fails?
On Fri, 04.11.16 09:38, Bjørn Forsman (bjorn.fors...@gmail.com) wrote:

> Hi Lennart,
>
> On 3 November 2016 at 20:19, Lennart Poettering wrote:
> > Your mail does not say in any way what precisely your issue is?
>
> Did you read the first post? I hope not, because I don't really know
> how to describe it more precisely than that :-)
>
> Below is a copy of the first post.
>
> When a mount unit fails (repeatedly), it takes the corresponding
> automount unit down with it. To me this breaks a very nice property
> I'd like to have:
>
> A mountpoint should EITHER return the mounted filesystem OR return an
> error.
>
> As it is now, when the automount unit has failed, programs accessing
> the mountpoint will not receive any errors and instead silently access
> the local filesystem. That's bad!
>
> I don't consider using mountpoint(1) or "test
> mountpoint/IF_YOU_SEE_THIS_ITS_NOT_MOUNTED" proper solutions, because
> they are out-of-band.
>
> I was thinking of adding Restart=always to the automount unit, but
> that still leaves a small timeframe where autofs is not active. So
> that's not ideal either. Also, using Restart= implies a proper .mount
> unit instead of /etc/fstab, but GVFS continuously activates autofs
> mounts unless the option "x-gvfs-hide" is in /etc/fstab. So I'm kind
> of stuck with /etc/fstab until that GVFS issue is solved.
>
> So the question is, what is the reason for the mount unit to take down
> the automount? I figure the automount should simply never fail.

Consider turning off the start limits of the mount unit if you don't
want systemd to give up eventually. Use StartLimitInterval= and
StartLimitBurst= in the mount unit file for that.

Lennart

--
Lennart Poettering, Red Hat
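A sketch of what Lennart suggests, using an invented sshfs mount. The unit name, remote path, and values are placeholders; the [Unit] placement and the StartLimitIntervalSec= spelling follow systemd >= 230 (older versions used the spelling in Lennart's mail):

```
# /etc/systemd/system/mnt-remote.mount (illustrative)
[Unit]
Description=Remote sshfs mount
# 0 disables start rate limiting, so systemd never gives up on the mount:
StartLimitIntervalSec=0

[Mount]
What=user@host:/export
Where=/mnt/remote
Type=fuse.sshfs
```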
Re: [systemd-devel] xmodmap gives: No protocol specified
On Fri, 04.11.16 12:47, Cecil Westerhof (cldwester...@gmail.com) wrote:

> I have a script I want to run as a service which uses:
>     xmodmap -pk
> I have to define the DISPLAY, so I use:
>     export DISPLAY=:0.0
>
> But this gives:
>     xmodmap: unable to open display ':0.0'
>
> When I try the same with at, I do not have this problem.
>
> What is happening here and how can I resolve this?

This is the systemd mailing list, we don't really have much knowledge
about X11, sorry.

Lennart

--
Lennart Poettering, Red Hat
Re: [systemd-devel] xmodmap gives: No protocol specified
On Fri, Nov 4, 2016 at 1:47 PM, Cecil Westerhof wrote:

> I have a script I want to run as a service which uses:
>     xmodmap -pk
> I have to define the DISPLAY, so I use:
>     export DISPLAY=:0.0
>
> But this gives:
>     xmodmap: unable to open display ':0.0'
>
> When I try the same with at, I do not have this problem.
>
> What is happening here and how can I resolve this?

What's happening is that you shouldn't run X11 programs as system
services.

Most of the time, Xlib saying "No protocol specified" means that the X
server rejected the connection attempt due to missing authentication
details. To fix that, either the program needs the path to your Xauthority
file, or the X server needs to be configured to allow all connections by
your UID.

The default location of Xauth information is ~/.Xauthority – if your
program cannot find it, either you need to set $HOME, or your display
manager (gdm &c.) probably stored it elsewhere and you need to set
$XAUTHORITY. Either way, don't run X11 programs as system services.

Allowing connections by UID might be simpler – run
`xhost +SI:localuser:$(id -un)` in your X11 startup script. (The full
SI:* syntax is described in Xsecurity(7).)

--
Mantas Mikulėnas
Re: [systemd-devel] automount unit that never fails?
On Tue, 20 Sep 2016, Bjørn Forsman wrote:

> Hi systemd developers,
>
> My name is Bjørn Forsman and this is my first post to this list. I have
> a question/issue with the behaviour of (auto)mount units.
>
> When a mount unit fails (repeatedly), it takes the corresponding
> automount unit down with it. To me this breaks a very nice property
> I'd like to have:
>
> A mountpoint should EITHER return the mounted filesystem OR return an
> error.
>
> As it is now, when the automount unit has failed, programs accessing
> the mountpoint will not receive any errors and instead silently access
> the local filesystem. That's bad!

Hi Bjørn,

For what it's worth, I'm not able to reproduce this on Fedora 24, systemd
229. I set up test automount and mount units, gave the mount unit bogus
options, and started the automount unit. When accessing the mount point I
got ENODEV errors until systemd reached the burst limit on the mount unit,
at which point access to the mount point simply blocked completely. At no
time did the automount unit ever fail or stop, though.

Now I did this with just a tmpfs filesystem, but I can't imagine the
behaviour would be different with sshfs. So perhaps some regression
between systemd 229 and 231?

- Michael
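A minimal reproduction along the lines Michael describes might look like the following pair of units (the unit names, mount point, and the deliberately bogus option are illustrative assumptions, not his actual test setup):

```
# /etc/systemd/system/mnt-test.automount
[Unit]
Description=Test automount

[Automount]
Where=/mnt/test

# /etc/systemd/system/mnt-test.mount
[Unit]
Description=Test mount that always fails

[Mount]
What=tmpfs
Where=/mnt/test
Type=tmpfs
# Invalid on purpose, so every triggered mount attempt fails:
Options=this-option-does-not-exist
```

Starting mnt-test.automount and then touching /mnt/test should exercise the failure path being discussed.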
[systemd-devel] xmodmap gives: No protocol specified
I have a script I want to run as a service which uses:
    xmodmap -pk
I have to define the DISPLAY, so I use:
    export DISPLAY=:0.0

But this gives:
    xmodmap: unable to open display ':0.0'

When I try the same with at, I do not have this problem.

What is happening here and how can I resolve this?

--
Cecil Westerhof
Re: [systemd-devel] nfs-server.service starts before _netdev iscsi mount completes (required)... how can I fix this?
On Thu, Nov 03, 2016 at 04:01:15PM -0700, c...@endlessnow.com wrote:
> so I'm using CentOS 7, and we're mounting a disk from our iSCSI
> SAN and then we want to export that via NFS. But on a fresh boot the
> nfs-server service fails because the filesystem isn't there yet. Any
> ideas on how to fix this?

Add RequiresMountsFor=/your/export/path to nfs-server.service

--
Tomasz Torcz                 Once you've read the dictionary,
xmpp: zdzich...@chrome.pl    every other book is just a remix.
[systemd-devel] nfs-server.service starts before _netdev iscsi mount completes (required)... how can I fix this?
so I'm using CentOS 7, and we're mounting a disk from our iSCSI SAN and
then we want to export that via NFS. But on a fresh boot the nfs-server
service fails because the filesystem isn't there yet. Any ideas on how
to fix this?
Re: [systemd-devel] automount unit that never fails?
Hi Lennart,

On 3 November 2016 at 20:19, Lennart Poettering wrote:
> Your mail does not say in any way what precisely your issue is?

Did you read the first post? I hope not, because I don't really know
how to describe it more precisely than that :-)

Below is a copy of the first post.

When a mount unit fails (repeatedly), it takes the corresponding
automount unit down with it. To me this breaks a very nice property
I'd like to have:

A mountpoint should EITHER return the mounted filesystem OR return an
error.

As it is now, when the automount unit has failed, programs accessing
the mountpoint will not receive any errors and instead silently access
the local filesystem. That's bad!

I don't consider using mountpoint(1) or "test
mountpoint/IF_YOU_SEE_THIS_ITS_NOT_MOUNTED" proper solutions, because
they are out-of-band.

I was thinking of adding Restart=always to the automount unit, but
that still leaves a small timeframe where autofs is not active. So
that's not ideal either. Also, using Restart= implies a proper .mount
unit instead of /etc/fstab, but GVFS continuously activates autofs
mounts unless the option "x-gvfs-hide" is in /etc/fstab. So I'm kind
of stuck with /etc/fstab until that GVFS issue is solved.

So the question is, what is the reason for the mount unit to take down
the automount? I figure the automount should simply never fail.

Thoughts?

(I'm running NixOS 16.09 with systemd 231, trying to set up a robust,
lazy sshfs mount.)

Best regards,
Bjørn Forsman
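For anyone who does want the out-of-band mountpoint check Bjørn dismisses above as a stopgap, it is only a few lines. This sketch mirrors the classic logic behind mountpoint(1) and Python's own os.path.ismount; it is a monitoring workaround, not a fix for the automount behavior:

```python
import os

def is_mountpoint(path):
    """Return True if `path` is a filesystem mount point.

    A directory is a mount point when it lives on a different device
    than its parent directory, or when it *is* its own parent (the
    root directory). A symlink is never reported as a mount point.
    """
    st = os.lstat(path)
    parent = os.lstat(os.path.join(path, ".."))
    return st.st_dev != parent.st_dev or st.st_ino == parent.st_ino
```

A program could call this before trusting reads from the automounted path, e.g. `is_mountpoint("/mnt/remote")`, and treat False as the error the kernel never delivered.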
[systemd-devel] How to make udev not touch my device?
Hey udev developers,

I'm a libvirt developer and I've been facing an interesting issue
recently. Libvirt is a library for managing virtual machines and as such
allows basically any device to be exposed to a virtual machine. For
instance, a virtual machine can use /dev/sdX as its own disk.

Because of security reasons, we allow users to configure their VMs to run
under different UID/GID and also SELinux context. That means that
whenever a VM is being started up, libvirtd (our daemon we have) relabels
all the necessary paths that the QEMU process (representing the VM) can
touch.

However, I'm facing an issue that I don't know how to fix. In some cases
QEMU can close & reopen a block device. However, closing a block device
triggers an event, and hence if there is a rule that sets a security
label on a device, the QEMU process is unable to reopen the device again.

My question is, what can we do to prevent udev from mangling the
security labels that we've set on the devices?

One of the ideas our lead developer had was for libvirt to set some kind
of udev label on devices managed by libvirt (when setting up security
labels) and then whenever udev sees such a labelled device it won't touch
it at all (this could be achieved by a rule perhaps?). Later, when the
domain is shutting down, libvirt removes that label. But I don't think
setting an arbitrary label on devices is supported, is it?

Michal
Re: [systemd-devel] Single Start-job remains listed after startup in state waiting ...
Hi,

Thx for the reply.

> We had an issue with that in the past, please retry with a less ancient
> version of systemd, I am pretty sure this was already fixed, but I can't
> tell you really when precisely.

This is most probably not possible for the project in this state. I'll
try to find the fix you are mentioning.

Best regards

Marko Hoyer
Software Group II (ADITG/SW2)
Tel. +49 5121 49 6948

-----Original Message-----
From: Lennart Poettering [mailto:lenn...@poettering.net]
Sent: Donnerstag, 3. November 2016 20:44
To: Hoyer, Marko (ADITG/SW2)
Cc: systemd Mailing List
Subject: Re: [systemd-devel] Single Start-job remains listed after startup in state waiting ...

On Fri, 28.10.16 14:55, Hoyer, Marko (ADITG/SW2) (mho...@de.adit-jv.com) wrote:

> Hello,
>
> we are observing a weird behavior with systemd 211.
>
> The issue:
>
> - After the startup is finished (multi-user.target is reached), one
>   single job (typ: start, unit: service) remains in the job queue in
>   state waiting

We had an issue with that in the past, please retry with a less ancient
version of systemd, I am pretty sure this was already fixed, but I can't
tell you really when precisely.

Lennart

--
Lennart Poettering, Red Hat