Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-09 Thread Chris Murphy
On Fri, Jul 8, 2016 at 12:39 PM, Lennart Poettering
 wrote:

> Well, personally I am not convinced that a policy of "automatic time-based
> degrading" is a good default policy, and that a policy of "require
> manual intervention to proceed with degrading" is a better default
> policy.

I think that's true; otherwise it second-guesses the default behavior
of Btrfs when a volume's devices aren't all present.

But conversely, right now there is no way to proceed with manual intervention at all.

>
> That all said, I am sure in many setups such an automatic degrading is
> useful too, but I am also sure that any fancier policies like that
> really shouldn't be implemented in systemd, but via some daemon or so
> shipped in btrfs-progs really.

That's being floated on the Btrfs list as well. The caveat is that this
ends up looking something like LVM's dmeventd or the mdadm --monitor daemon.

-- 
Chris Murphy
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-06 Thread Kai Krakow
On Wed, 6 Jul 2016 06:21:17 +0200, Lennart Poettering wrote:

> On Mon, 04.07.16 12:32, Chris Murphy (li...@colorremedies.com) wrote:
> 
> > I have a system where I get an indefinite
> > 
> > "A start job is running for dev-vda2.device (xmin ys / no limit)"
> > 
> > Is there a boot parameter to use to change the no limit to have a
> > limit? rd.timeout does nothing. When I use rd.break=pre-mount and
> > use blkid, /dev/vda2 is there and can be mounted with the same mount
> > options as specified with rootflags= on the command line. So I'm not
> > sure why it's hanging indefinitely. I figure with
> > systemd.log_level=debug and rd.debug and maybe rd.udev.debug I
> > should be able to figure out why this is failing to mount.  
> 
> It should be possible to use x-systemd.device-timeout= among the mount
> options in rootflags= to alter the timeout for the device.
> 
> But what precisely do you expect to happen in this case? You'd simply
> choose between "waits forever" and "fails after some time"... A
> missing root device is hardly something you can just ignore and
> proceed...

I think a degraded btrfs is not actually a missing rootfs. Systemd
tries to decide between black and white here - but btrfs also knows
gray. And I don't mean that systemd should incorporate something to
resolve or announce this issue - that's a UI problem: if the device pool
is degraded, it's the UI that should tell the user. I think that some
time in the future btrfs may automatically fall back to degraded mounts,
just like software and hardware RAID do. In that case (raid), systemd
also doesn't decide not to boot and wait forever for a device that's
never going to appear. The problem currently is just that btrfs doesn't
go degraded automatically (for reasons that have been outlined on the
btrfs list) - so systemd apparently should have a way to work around
this. The degraded case for btrfs is already covered by the fact that
you need to supply "degraded" in rootflags on the kernel cmdline -
otherwise mounting will fail anyway, no matter whether systemd has a
workaround or not. So the UI part is already covered, more or less.

I don't think that incorporating rootflags into the "btrfs ready"
decision is going to work. And as I understand it, using device-timeout
will just result in a missing rootfs after the timeout, and the degraded
fs won't be marked as ready by it. So btrfs maybe needs special timeout
handling for "btrfs ready", as I wrote in the other post in this thread.

-- 
Regards,
Kai

Replies to list-only preferred.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-06 Thread Kai Krakow
On Wed, 6 Jul 2016 06:26:03 +0200, Lennart Poettering wrote:

> On Tue, 05.07.16 14:00, Chris Murphy (li...@colorremedies.com) wrote:
> 
> > On Tue, Jul 5, 2016 at 12:45 PM, Chris Murphy
> >  wrote:  
> > > OK it must be this.
> > >
> > > :/# cat /usr/lib/udev/rules.d/64-btrfs.rules
> > > # do not edit this file, it will be overwritten on update
> > >
> > > SUBSYSTEM!="block", GOTO="btrfs_end"
> > > ACTION=="remove", GOTO="btrfs_end"
> > > ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
> > >
> > > # let the kernel know about this btrfs filesystem, and check if it is complete
> > > IMPORT{builtin}="btrfs ready $devnode"
> > >
> > > # mark the device as not ready to be used by the system
> > > ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"
> > >
> > > LABEL="btrfs_end"  
> > 
> > Yep.
> > https://lists.freedesktop.org/archives/systemd-commits/2012-September/002503.html
> > 
> > The problem is that with rootflags=degraded it still indefinitely
> > hangs. And even without the degraded option, I don't think the
> > indefinite hang waiting for missing devices is the best way to find
> > out there's been device failures. I think it's better to fail to
> > mount, and end up at a dracut shell.  
> 
> I figure it would be OK to merge a patch that makes the udev rules
> above set SYSTEMD_READY immediately if the device popped up in case
> some new kernel command line option is set.
> 
> Hooking up rootflags=degraded with this is unlikely to work I fear, as
> by the time the udev rules run we have no idea yet what systemd wants
> to make from the device in the end. That means knowing this early the
> fact that system wants to mount it as root disk with some specific
> mount options is not really sensible in the design...

A possible solution could be to fall back to simply announcing
SYSTEMD_READY=1 after a sensible timeout, instead of waiting forever for
a situation that is unlikely to resolve. That way, the system would boot
slowly in the degraded case, but it would continue to boot. If something
goes wrong at that point, it would probably fall back to the rescue
shell, right?

-- 
Regards,
Kai

Replies to list-only preferred.



Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-06 Thread Chris Murphy
On Wed, Jul 6, 2016 at 3:36 AM, Andrei Borzenkov  wrote:
> On Wed, Jul 6, 2016 at 7:26 AM, Lennart Poettering
>  wrote:

>> I figure it would be OK to merge a patch that makes the udev rules
>> above set SYSTEMD_READY immediately if the device popped up in case
>> some new kernel command line option is set.
>>
>
> That won't work. This will make it impossible to mount any btrfs that
> needs more than 1 device to actually be mountable (even degraded).
> Because then it will announce btrfs as soon as any device is seen and
> filesystem will be incomplete and won't mount. And we do not retry
> later.
>
> The situation is the same as we had with Linux MD assembly. What is required is
>
> a) we need a way to query btrfs whether it is mountable (may be degraded)

BTRFS_IOC_DEVICES_READY exists, but it might lack sufficient
granularity. There are more than two states for redundant
multiple-device volumes, but only two states for single-device volumes
or multiple-device volumes lacking redundancy (raid0 and single
profiles).

Basically this ioctl is not telling the complete truth when a
redundant volume is missing just one device (or two for raid6): it
says the devices are not ready. The complete truth is that the devices
are minimally ready, i.e. a mount should succeed with rootflags=degraded.
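For what it's worth, btrfs-progs exposes this same ioctl as `btrfs device
ready`. A small shell sketch (the device path is hypothetical, and note
that a nonzero exit conflates "degraded but mountable" with "unmountable" -
exactly the missing granularity described above):

```shell
# Returns 0 only when the kernel has seen every member device of the
# filesystem that $1 belongs to. Any nonzero status means "not ready",
# regardless of whether a degraded mount would actually succeed.
btrfs_all_devices_present() {
    btrfs device ready "$1" >/dev/null 2>&1
}

# Example (hypothetical device from this thread):
#   btrfs_all_devices_present /dev/vda2 && echo complete || echo incomplete
```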



> b) we need some way to define external policy whether we want to mount
> degraded btrfs or not. In general case, not just special case of root
> filesystem
> c) we need some way to wait for more devices to appear before we
> attempt degraded mount
> d) finally we need some way to actually perform degraded mount when we
> decide to do it

Yep. Even in the md and lvm cases, if they were to indicate that the
minimum devices for a mount are available, systemd could use a timer to
delay the mount attempt for most use cases, while other use cases would
instead try a degraded mount immediately (or never).

I think systemd needs more information to do the right thing, and the
right information can only come from the code responsible for making
that determination. For Btrfs anyway, that's kernel code. I'm not sure
where this would go for mdadm and lvm arrays.



> As far as I understand btrfs must be mounted with special option (-o
> degraded), so this can be used as policy decision.

That's also true. However, it's still possible that by the time systemd
goes to mount, the minimum number of devices for a successful degraded
mount isn't yet present. *shrug*




-- 
Chris Murphy


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-06 Thread Andrei Borzenkov
On Wed, Jul 6, 2016 at 7:26 AM, Lennart Poettering
 wrote:
> On Tue, 05.07.16 14:00, Chris Murphy (li...@colorremedies.com) wrote:
>
>> On Tue, Jul 5, 2016 at 12:45 PM, Chris Murphy  
>> wrote:
>> > OK it must be this.
>> >
>> > :/# cat /usr/lib/udev/rules.d/64-btrfs.rules
>> > # do not edit this file, it will be overwritten on update
>> >
>> > SUBSYSTEM!="block", GOTO="btrfs_end"
>> > ACTION=="remove", GOTO="btrfs_end"
>> > ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
>> >
>> > # let the kernel know about this btrfs filesystem, and check if it is complete
>> > IMPORT{builtin}="btrfs ready $devnode"
>> >
>> > # mark the device as not ready to be used by the system
>> > ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"
>> >
>> > LABEL="btrfs_end"
>>
>> Yep.
>> https://lists.freedesktop.org/archives/systemd-commits/2012-September/002503.html
>>
>> The problem is that with rootflags=degraded it still indefinitely
>> hangs. And even without the degraded option, I don't think the
>> indefinite hang waiting for missing devices is the best way to find
>> out there's been device failures. I think it's better to fail to
>> mount, and end up at a dracut shell.
>
> I figure it would be OK to merge a patch that makes the udev rules
> above set SYSTEMD_READY immediately if the device popped up in case
> some new kernel command line option is set.
>

That won't work. It would make it impossible to mount any btrfs that
needs more than one device to actually be mountable (even degraded),
because it would announce the btrfs as soon as any device is seen; the
filesystem would be incomplete and would fail to mount. And we do not
retry later.

The situation is the same as we had with Linux MD assembly. What is required is:

a) a way to query btrfs whether it is mountable (possibly degraded)
b) some way to define external policy on whether we want to mount a
degraded btrfs or not - in the general case, not just the special case
of the root filesystem
c) some way to wait for more devices to appear before we attempt a
degraded mount
d) finally, some way to actually perform the degraded mount when we
decide to do it

This cannot be implemented using current unit dependencies at all. The
only implementation that could be squeezed into the existing framework
is a separate program that listens to udev events and waits for all
devices to be present. btrfs mount units would then depend on this
program and wait for it to complete; successful completion means the
filesystem can be mounted.
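A minimal sketch of what points a) through d) might look like inside such
a program - everything here is hypothetical policy, not an existing tool:
the polling via `btrfs device ready`, the 30-second default, and the
fallback to -o degraded are all assumptions for illustration:

```shell
# Wait up to $3 seconds (default 30) for all member devices of $1, then
# mount it at $2, falling back to a degraded mount when the deadline passes.
wait_and_mount_btrfs() {
    dev=$1; mnt=$2; deadline=$(( $(date +%s) + ${3:-30} ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        # (a)+(c): poll the kernel until the volume is complete
        if btrfs device ready "$dev" >/dev/null 2>&1; then
            mount "$dev" "$mnt"; return
        fi
        sleep 1
    done
    # (b)+(d): deadline passed; policy says attempt a degraded mount
    mount -o degraded "$dev" "$mnt"
}
```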

As far as I understand, a degraded btrfs must be mounted with a special
option (-o degraded), so this can be used as the policy decision.

This will also make the existing udev rules obsolete (and we can
finally stop lying about device availability).

> Hooking up rootflags=degraded with this is unlikely to work I fear, as
> by the time the udev rules run we have no idea yet what systemd wants
> to make from the device in the end. That means knowing this early the
> fact that system wants to mount it as root disk with some specific
> mount options is not really sensible in the design...
>

This fits well with my suggestion if we use "degraded" in the fs flags
as an indicator that we are allowed to mount the filesystem in degraded mode.


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Lennart Poettering
On Mon, 04.07.16 12:32, Chris Murphy (li...@colorremedies.com) wrote:

> I have a system where I get an indefinite
> 
> "A start job is running for dev-vda2.device (xmin ys / no limit)"
> 
> Is there a boot parameter to use to change the no limit to have a
> limit? rd.timeout does nothing. When I use rd.break=pre-mount and use
> blkid, /dev/vda2 is there and can be mounted with the same mount
> options as specified with rootflags= on the command line. So I'm not
> sure why it's hanging indefinitely. I figure with
> systemd.log_level=debug and rd.debug and maybe rd.udev.debug I should
> be able to figure out why this is failing to mount.

It should be possible to use x-systemd.device-timeout= among the mount
options in rootflags= to alter the timeout for the device.

But what precisely do you expect to happen in this case? You'd simply
choose between "waits forever" and "fails after some time"... A
missing root device is hardly something you can just ignore and
proceed...
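As an illustration, that option rides along with the other mount options
in rootflags= on the kernel command line; the device, subvolume, and the
30s value here are only examples:

```
root=/dev/vda2 rootflags=subvol=root,x-systemd.device-timeout=30s
```

If it works as described, the dev-vda2.device start job then fails after
30 seconds instead of waiting with "no limit".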

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Lennart Poettering
On Tue, 05.07.16 14:00, Chris Murphy (li...@colorremedies.com) wrote:

> On Tue, Jul 5, 2016 at 12:45 PM, Chris Murphy  wrote:
> > OK it must be this.
> >
> > :/# cat /usr/lib/udev/rules.d/64-btrfs.rules
> > # do not edit this file, it will be overwritten on update
> >
> > SUBSYSTEM!="block", GOTO="btrfs_end"
> > ACTION=="remove", GOTO="btrfs_end"
> > ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
> >
> > # let the kernel know about this btrfs filesystem, and check if it is complete
> > IMPORT{builtin}="btrfs ready $devnode"
> >
> > # mark the device as not ready to be used by the system
> > ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"
> >
> > LABEL="btrfs_end"
> 
> Yep.
> https://lists.freedesktop.org/archives/systemd-commits/2012-September/002503.html
> 
> The problem is that with rootflags=degraded it still indefinitely
> hangs. And even without the degraded option, I don't think the
> indefinite hang waiting for missing devices is the best way to find
> out there's been device failures. I think it's better to fail to
> mount, and end up at a dracut shell.

I figure it would be OK to merge a patch that makes the udev rules
above set SYSTEMD_READY immediately when the device pops up, in case
some new kernel command line option is set.

Hooking up rootflags=degraded with this is unlikely to work, I fear, as
by the time the udev rules run we have no idea yet what systemd wants
to make of the device in the end. Knowing this early that systemd wants
to mount it as the root disk with some specific mount options is not
really something the design can sensibly accommodate...

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Chris Murphy
On Tue, Jul 5, 2016 at 12:45 PM, Chris Murphy  wrote:
> OK it must be this.
>
> :/# cat /usr/lib/udev/rules.d/64-btrfs.rules
> # do not edit this file, it will be overwritten on update
>
> SUBSYSTEM!="block", GOTO="btrfs_end"
> ACTION=="remove", GOTO="btrfs_end"
> ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
>
> # let the kernel know about this btrfs filesystem, and check if it is complete
> IMPORT{builtin}="btrfs ready $devnode"
>
> # mark the device as not ready to be used by the system
> ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"
>
> LABEL="btrfs_end"

Yep.
https://lists.freedesktop.org/archives/systemd-commits/2012-September/002503.html

The problem is that with rootflags=degraded it still indefinitely
hangs. And even without the degraded option, I don't think the
indefinite hang waiting for missing devices is the best way to find
out there's been device failures. I think it's better to fail to
mount, and end up at a dracut shell.

-- 
Chris Murphy


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Chris Murphy
OK it must be this.

:/# cat /usr/lib/udev/rules.d/64-btrfs.rules
# do not edit this file, it will be overwritten on update

SUBSYSTEM!="block", GOTO="btrfs_end"
ACTION=="remove", GOTO="btrfs_end"
ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"

# let the kernel know about this btrfs filesystem, and check if it is complete
IMPORT{builtin}="btrfs ready $devnode"

# mark the device as not ready to be used by the system
ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"

LABEL="btrfs_end"





-- 
Chris Murphy


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Chris Murphy
On Tue, Jul 5, 2016 at 10:53 AM, Chris Murphy  wrote:

> It seems like this dracut doesn't understand rd.timeout.

OK, now on Fedora Rawhide, rd.timeout=30 appears to work. The failure
is the same: systemd is waiting for /dev/vda2 for an unknown reason; it
doesn't even attempt to mount it, so this is not (yet) a mount failure.

And journalctl is more revealing...but I'm still lost.
https://drive.google.com/open?id=0B_2Asp8DGjJ9dE1MOWlXbjYyeVE

[4.510767] localhost.localdomain systemd[1]: dev-vda1.device: Changed dead -> plugged
[4.510995] localhost.localdomain systemd[1]: sys-devices-pci0000:00-0000:00:09.0-virtio2-block-vda-vda1.device: Changed dead -> plugged

That never happens for vda2.

[4.602049] localhost.localdomain systemd-udevd[205]: worker [226] exited
[   63.212453] localhost.localdomain systemd[1]: dev-vda2.device: Job dev-vda2.device/start timed out.
[   63.213227] localhost.localdomain systemd[1]: dev-vda2.device: Job dev-vda2.device/start finished, result=timeout
[   63.213904] localhost.localdomain systemd[1]: Timed out waiting for device dev-vda2.device.

It goes straight from the udevd output to a long wait with nothing
happening, then the timeout. I wonder if this is a problem in
64-btrfs.rules just because there is a missing device. I thought
root=/dev/vda2 would work around the problem where the btrfs udev rule
does not instantiate the Btrfs volume UUID if there is a missing
device; and now I wonder if this has been misunderstood all along.

In the shell

:/# blkid
/dev/sr0: UUID="2016-07-04-07-03-49-00"
LABEL="Fedora-S-dvd-x86_64-rawh" TYPE="iso9660" PTUUID="6c9a3e8e"
PTTYPE="dos"
/dev/vda1: UUID="b061db37-3ef5-4b5c-aef0-bbaa360f7788" TYPE="ext4"
PARTUUID="6cd85e0f-01"
/dev/vda2: LABEL="fedora" UUID="488791ba-5796-4653-a3da-62c36fc2067b"
UUID_SUB="61d57d89-4299-4d84-8e78-c3833ac297f3" TYPE="btrfs"
PARTUUID="6cd85e0f-02"
:/#

blkid very clearly sees both vda2 and the volume UUID. I can manually
mount this volume from this same shell, but somehow systemd and udev
between them can't figure it out, because they're waiting for something
to happen before even trying to mount. What are they waiting for, the
missing device to show up? Maybe that's the problem: degraded boot of
Btrfs is simply not supported at all?



-- 
Chris Murphy


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Chris Murphy
New console log.
Command line: BOOT_IMAGE=/vmlinuz-3.10.0-327.22.2.el7.x86_64
root=/dev/vda2 ro rootflags=subvol=root,degraded vconsole.keymap=us
crashkernel=auto vconsole.font=latarcyrheb-sun16 LANG=en_US.UTF-8
rd.timeout=30 rd.debug rd.udev.debug systemd.log_level=debug
systemd.log_target=console console=ttyS0,38400

It seems like this dracut doesn't understand rd.timeout. And even with
rd.shell it doesn't drop to a shell. I don't see any attempt to even
mount vda2, so I don't know what the job is waiting for. If the mount
were to explicitly fail, whether from not finding vda2 or the root
subvolume or from being unable to mount degraded, that would presumably
trigger rd.shell. But that doesn't happen; it just keeps waiting.

https://drive.google.com/open?id=0B_2Asp8DGjJ9WW9FMzIwNGJQSGc


Chris Murphy


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Chris Murphy
pre-mount:/# cat /run/systemd/generator/dev-vda2.device.d/timeout.conf
[Unit]
JobTimeoutSec=0


This is virsh console output during boot:
https://drive.google.com/open?id=0B_2Asp8DGjJ9RlN1cGJTTEtHcjg

Here is journalctl -b -o short-monotonic output from the pre-mount
shell using Command line:
BOOT_IMAGE=/vmlinuz-3.10.0-327.22.2.el7.x86_64 root=/dev/vda2 ro
rootflags=subvol=root,degraded vconsole.keymap=us crashkernel=auto
vconsole.font=latarcyrheb-sun16 rhgb quiet LANG=en_US.UTF-8
systemd.log_level=debug rd.udev.debug rd.debug
systemd.log_target=console console=ttyS0,38400 rd.break=pre-mount

https://drive.google.com/open?id=0B_2Asp8DGjJ9OThvS2Q3X3pWT3c


What I'm getting from the journal is...

[2.939557] localhost.localdomain kernel: virtio-pci 0000:00:05.0: irq 27 for MSI/MSI-X
[2.939595] localhost.localdomain kernel: virtio-pci 0000:00:05.0: irq 28 for MSI/MSI-X
[2.939849] localhost.localdomain kernel:  vda: vda1 vda2

OK so the kernel sees the drive and the partitions.

[3.043274] localhost.localdomain kernel: Btrfs loaded
[3.043836] localhost.localdomain kernel: BTRFS: device label centos devid 1 transid 55 /dev/vda2

It sees the file system on /dev/vda2 (raid1, missing device vdb1)

But then I hit pre-mount, so I don't actually find out why it won't
mount. If there were a problem with degraded mounts, the mount should
explicitly fail, and then systemd would drop to a shell if rd.shell is
used. But it doesn't; it just hangs. So the mount isn't hard-failing;
it's some kind of soft failure where systemd just waits for some
reason.


Chris Murphy


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-05 Thread Chris Murphy
On Mon, Jul 4, 2016 at 9:24 PM, Andrei Borzenkov  wrote:
> On 04.07.2016 21:32, Chris Murphy wrote:
>> I have a system where I get an indefinite
>>
>> "A start job is running for dev-vda2.device (xmin ys / no limit)"
>>
>> Is there a boot parameter to use to change the no limit to have a
>> limit? rd.timeout does nothing.
>
> It should. Use rd.break and examine unit configuration (systemctl show
> dev-vda2.device) as well as /run/systemd/system/dev-vda1.device.d.
> dracut is using infinite timeout by default indeed.

pre-mount:/# systemctl status dev-vda2.device
Got unexpected auxiliary data with level=1 and type=2
Accepted new private connection.
Got unexpected auxiliary data with level=1 and type=2
Got unexpected auxiliary data with level=1 and type=2
Got message type=method_call sender=n/a
destination=org.freedesktop.systemd1
object=/org/freedesktop/systemd1/unit/dev_2dvda2_2edevice
interface=org.freedesktop.DBus.Properties member=GetAll cookie=1
reply_cookie=0 error=n/a
Sent message type=method_return sender=n/a destination=n/a object=n/a
interface=n/a member=n/a cookie=1 reply_cookie=1 error=n/a
● dev-vda2.device
   Loaded: loaded
  Drop-In: /run/systemd/generator/dev-vda2.device.d
   └─timeout.conf
   Active: inactive (dead)
Got disconnect on private connection.

pre-mount:/# ls -la /run/systemd/system/
total 0
drwxr-xr-x 2 root 0  40 Jul  5 15:36 .
drwxr-xr-x 7 root 0 180 Jul  5 15:36 ..


This is CentOS 7.2.
pre-mount:/# systemctl --version
systemd 219
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP
+LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS
+KMOD +IDN

Once I get a better idea of what's going on here, I'll check it with
Fedora Rawhide so any bugs/RFEs are filed against something closer to
upstream versions.


-- 
Chris Murphy


Re: [systemd-devel] How to set a limit for mounting rootfs?

2016-07-04 Thread Andrei Borzenkov
On 04.07.2016 21:32, Chris Murphy wrote:
> I have a system where I get an indefinite
> 
> "A start job is running for dev-vda2.device (xmin ys / no limit)"
> 
> Is there a boot parameter to use to change the no limit to have a
> limit? rd.timeout does nothing.

It should. Use rd.break and examine the unit configuration (systemctl
show dev-vda2.device) as well as /run/systemd/system/dev-vda1.device.d.
dracut does indeed use an infinite timeout by default.
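For illustration, the generator's drop-in can be overridden with one of
your own; a hypothetical example (the path and the 30-second value are
mine, and for the initrd case the drop-in would also have to be included
in the initramfs):

```
# /etc/systemd/system/dev-vda2.device.d/timeout.conf (hypothetical)
[Unit]
JobTimeoutSec=30
```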

> When I use rd.break=pre-mount and use
> blkid, /dev/vda2 is there and can be mounted with the same mount
> options as specified with rootflags= on the command line. So I'm not
> sure why it's hanging indefinitely. I figure with
> systemd.log_level=debug and rd.debug and maybe rd.udev.debug I should
> be able to figure out why this is failing to mount.
> 

In all cases I can remember, this was caused by lack of CONFIG_FHANDLE
support in the kernel.


[systemd-devel] How to set a limit for mounting rootfs?

2016-07-04 Thread Chris Murphy
I have a system where I get an indefinite

"A start job is running for dev-vda2.device (xmin ys / no limit)"

Is there a boot parameter to change the "no limit" to an actual limit?
rd.timeout does nothing. When I use rd.break=pre-mount and run blkid,
/dev/vda2 is there and can be mounted with the same mount options as
specified with rootflags= on the command line. So I'm not sure why it's
hanging indefinitely. I figure with systemd.log_level=debug and
rd.debug and maybe rd.udev.debug I should be able to figure out why
this is failing to mount.



-- 
Chris Murphy