Re: [systemd-devel] systemd-222 mount issues on CentOS 7

2016-10-05 Thread Lokesh Mandvekar
On Wed, Oct 05, 2016 at 09:38:18AM +0300, Andrei Borzenkov wrote:
> On Wed, Oct 5, 2016 at 8:24 AM, Lokesh Mandvekar <l...@fedoraproject.org> 
> wrote:
> > On Tue, Oct 04, 2016 at 10:28:25AM +0200, Michal Sekletar wrote:
> >> On Tue, Sep 27, 2016 at 5:05 PM, Lokesh Mandvekar
> >> <l...@fedoraproject.org> wrote:
> >> > Now, I can mount these partitions with:
> >> >
> >> > # lvm vgchange -ay
> >> >
> > > but this still doesn't automount successfully on a reboot.
> >> >
> >> > Did I miss something here?
> >>
> >> I'd check from the emergency shell whether lvm2-pvscan@.service was run.
> >> This instantiated systemd service is responsible for scanning LVM PVs
> >> and auto-activating the LVs on them. Note that it is spawned from udev
> >> rules when certain conditions are met, e.g. the block device is
> >> identified as LVM2_member and the udev event doesn't have the
> >> SYSTEMD_READY=0 property set.
> >
> > Michal, thanks for the reply.
> >
> > What's the correct way to check if lvm2-pvscan@.service was run?
> >
> > I tried:
> >
> > # systemctl status lvm2-pvscan@.service
> > Failed to get properties: Unit name lvm2-pvscan@.service is not valid.
> >
> 
> You need to look for instances of the template, not for the template
> itself. Unfortunately, the only way to do that is really
> 
> systemctl --all | grep lvm2-pvscan

Thanks Andrei, I tried this and it returns nothing on my machine.
Guess it means none of the instances were run?

> 
> and then check each individual service.
> 
> Extending systemctl to automatically handle (at least, for display
> purposes) template instances may be useful.
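For the record, here's roughly what I'm running to dig further (a sketch;
the 8:2 instance name and /dev/sda2 below are just examples — real
instances are named after the PV's major:minor numbers, which will differ
on your system):

```shell
# List all loaded instances of the lvm2-pvscan@ template.
# (Pattern support needs a reasonably recent systemctl; otherwise
#  fall back to: systemctl --all | grep lvm2-pvscan)
systemctl list-units --all 'lvm2-pvscan@*'

# Inspect one instance; instances are named after the PV's
# major:minor device numbers, e.g. 8:2 for /dev/sda2 (example).
systemctl status 'lvm2-pvscan@8:2.service'

# Check whether udev actually tagged the device as an LVM PV,
# and whether SYSTEMD_READY=0 is set on it (/dev/sda2 is an example path).
udevadm info /dev/sda2 | grep -E 'ID_FS_TYPE|SYSTEMD_READY'
```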


-- 
Lokesh
Freenode: lsm5
GPG: 0xC7C3A0DD
https://keybase.io/lsm5


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-222 mount issues on CentOS 7

2016-10-04 Thread Lokesh Mandvekar
On Tue, Oct 04, 2016 at 10:28:25AM +0200, Michal Sekletar wrote:
> On Tue, Sep 27, 2016 at 5:05 PM, Lokesh Mandvekar
> <l...@fedoraproject.org> wrote:
> > Now, I can mount these partitions with:
> >
> > # lvm vgchange -ay
> >
> > but this still doesn't automount successfully on a reboot.
> >
> > Did I miss something here?
> 
> I'd check from the emergency shell whether lvm2-pvscan@.service was run.
> This instantiated systemd service is responsible for scanning LVM PVs
> and auto-activating the LVs on them. Note that it is spawned from udev
> rules when certain conditions are met, e.g. the block device is
> identified as LVM2_member and the udev event doesn't have the
> SYSTEMD_READY=0 property set.

Michal, thanks for the reply.

What's the correct way to check if lvm2-pvscan@.service was run?

I tried:

# systemctl status lvm2-pvscan@.service
Failed to get properties: Unit name lvm2-pvscan@.service is not valid.

> 
> Also, there have been a couple of bugfixes since systemd-222 that may
> be related. We backported them to RHEL/CentOS 7 (systemd-219).

Could you please link me to these patches if that's doable?
I see the rpm source contains tons of patches.

Thanks,
-- 
Lokesh
Freenode: lsm5
GPG: 0xC7C3A0DD
https://keybase.io/lsm5




[systemd-devel] systemd-222 mount issues on CentOS 7

2016-09-27 Thread Lokesh Mandvekar
Hi list,

I've rebuilt systemd-222 from the Fedora dist-git sources for CentOS 7
(needed for rkt). This is part of my work to provide an rkt rpm for the
CentOS Virt SIG. The koji build can be found here:
http://cbs.centos.org/koji/buildinfo?buildID=12569

On installation and reboot, I notice that systemd only mounts the root
filesystem successfully, while the other partitions (/home etc.) fail
to mount.

# systemctl status local-fs.target
● local-fs.target - Local File Systems
   Loaded: loaded (/usr/lib/systemd/system/local-fs.target; static; vendor preset: enabled)
   Active: inactive (dead) since Mon 2016-09-19 09:57:51 EDT; 1 weeks 1 days ago
     Docs: man:systemd.special(7)

Sep 19 09:57:51 minato systemd[1]: Stopped target Local File Systems.
Sep 19 09:57:51 minato systemd[1]: Stopping Local File Systems.
Sep 19 09:59:24 minato systemd[1]: Dependency failed for Local File Systems.
Sep 19 09:59:24 minato systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Sep 19 09:59:24 minato systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
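The "Dependency failed" line above doesn't say which mount actually
failed; here's the sort of thing I've been running to narrow it down
(the home.mount name below is just an example):

```shell
# Show all failed units; the failed .mount or .device unit that
# local-fs.target depends on should appear here.
systemctl --failed

# Walk the dependency tree of local-fs.target to see which mount
# units it pulls in and their current state.
systemctl list-dependencies local-fs.target

# For a specific failing mount (example name), the journal usually
# shows whether it timed out waiting for the backing device.
journalctl -b -u home.mount
```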

Now, I can mount these partitions with:

# lvm vgchange -ay

but this still doesn't automount successfully on a reboot.

Did I miss something here?
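(For completeness: one crude workaround I'm considering — sketched below
and untested, with the unit name and ordering directives being my guesses
rather than anything blessed — is a oneshot unit that activates the VGs
before local mounts are attempted:)

```ini
# /etc/systemd/system/lvm-activate-workaround.service  (illustrative name)
[Unit]
Description=Activate LVM volume groups early (workaround)
DefaultDependencies=no
After=systemd-udev-settle.service
Before=local-fs-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/lvm vgchange -ay

[Install]
WantedBy=local-fs-pre.target
```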

Thanks,
-- 
Lokesh
Freenode: lsm5
GPG: 0xC7C3A0DD
https://keybase.io/lsm5




Re: [systemd-devel] Docker vs PrivateTmp

2015-01-18 Thread Lokesh Mandvekar
On Sun, Jan 18, 2015 at 11:38:12PM -0500, Lars Kellogg-Stedman wrote:
> On Sun, Jan 18, 2015 at 08:50:35PM -0500, Colin Walters wrote:
> > On Sat, Jan 17, 2015, at 11:02 PM, Lars Kellogg-Stedman wrote:
> > > Hello all,
> > >
> > > With systemd 216 on Fedora 21 (kernel 3.17.8), I have run into an odd
> > > behavior concerning the PrivateTmp directive, and I am looking for
> > > help identifying this as:
> > >
> > > - Everything Is Working As Designed, Citizen
> > > - A bug in Docker (some mount flag is being set incorrectly?)
> >
> > This should be fixed by:
> > http://pkgs.fedoraproject.org/cgit/docker-io.git/commit/?id=6c9e373ee06cb1aee07d3cae426c46002663010d
> >
> > i.e. having docker.service use MountFlags=private, so its mounts
> > aren't visible to other processes.
>
> Colin,
>
> Thanks for the pointer.
>
> It seems as if using MountFlags=private is going to cause a new set of
> problems:
>
> Imagine that I am a system administrator using Docker to containerize
> services.  I want to set up a webserver container on my Docker
> host, so I mount the web content from a remote server:
>
>     mount my-fancy-server:/vol/content /content
>
> And then expose that as a Docker volume:
>
>     docker run -v /content:/content webserver
>
> This will fail mysteriously, because with MountFlags=private, the
> mount of my-fancy-server:/vol/content on /content won't be visible to
> Docker containers.  I will spend fruitless hours trying to figure out
> why such a seemingly simple operation is failing.
>
> I think we actually want MountFlags=slave, which will permit mounts
> from the global namespace to propagate into the service namespace
> without permitting propagation in the other direction.  It seems like
> this would be the Least Surprising behavior.
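(For anyone wanting to try the slave-propagation suggestion locally:
presumably it would just be a drop-in along these lines — the path is the
conventional drop-in location, followed by a `systemctl daemon-reload`
and a restart of docker.service:)

```ini
# /etc/systemd/system/docker.service.d/mountflags.conf
[Service]
MountFlags=slave
```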

Copying dwalsh


-- 
Lokesh
Freenode, OFTC: lsm5
GPG: 0xC7C3A0DD

