On 08/15/2012 11:01 AM, C Anthony Risinger wrote:
On Wed, Aug 15, 2012 at 6:38 AM, Baho Utot <baho-u...@columbus.rr.com> wrote:
On 08/14/2012 08:53 PM, Oon-Ee Ng wrote:
On Wed, Aug 15, 2012 at 8:13 AM, Tom Gundersen <t...@jklm.no> wrote:
On Wed, Aug 15, 2012 at 1:55 AM, David Benfell
<benf...@parts-unknown.org> wrote:
Does systemd not use the standard
mount program and follow /etc/fstab?
It does, though it does not use "mount -a"; rather, it mounts each fs
separately.

[putolin]

I came across another anomaly on my systemd boxes that I would like
someone to verify if they can.  Please do this on a backup system.

I was rearranging some lvm partitions that were mounted in /etc/fstab;
actually I removed them and created two new lvm partitions with
different names, but failed to update the fstab.  Upon rebooting, the
systems failed to boot and were stuck trying to mount the non-existent
lvm partitions.  I could not fix the systems as I could not get a
"recovery" bash prompt.  I had to use a live CD to edit the fstab and
then all was well.  On all my sysvinit systems a bad mount point would
just give me an error and booting would continue.
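
For what it is worth, mount(8) documents a "nofail" option that is
meant to keep boot from choking on a missing device, and systemd is
supposed to honor it as well.  A sketch of how the stale entries could
have been marked (the filesystem type here is an assumption, adjust to
suit):

    # /etc/fstab -- "nofail" so a missing device does not block boot
    /dev/lvm/lfs   /mnt/lfs       ext4   defaults,nofail   0 2
    /dev/lvm/LFS   /mnt/lfs/LFS   ext4   defaults,nofail   0 2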

Could some brave enterprising soul confirm this?

This created the following question: Can systemd boot a system without
an fstab?
you would have to provide the mountpoints -- depending on what you
were mounting i'm quite sure initscripts would fail (/usr? /var? what
was changed??), though they may very well just keep chugging on,
pretending all is well.

root mount depends on nothing more than what's listed on the kernel
cmdline in grub.cfg or equivalent.  you could have also added
`break=y` (legacy form, i forget the new syntax) to open a shell in
the initramfs and correct from there.
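
something like this on the kernel line in grub.cfg (the lv name is
just an example, use whatever your real root is):

    linux /vmlinuz-linux root=/dev/lvm/root rw break=y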

AFAIK systemd doesn't NEED an fstab, but you would then need to
provide native *.mount files instead ... SOMETHING has to tell it
where the mounts go, yes?
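
an untested sketch of what one would look like -- note the unit name
must match the escaped mount path, so /mnt/lfs becomes mnt-lfs.mount,
and the filesystem type here is a guess:

    # /etc/systemd/system/mnt-lfs.mount
    [Unit]
    Description=LFS build area

    [Mount]
    What=/dev/lvm/lfs
    Where=/mnt/lfs
    Type=ext4

    [Install]
    WantedBy=local-fs.target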


I don't know what you're pointing out here.


What I had was /dev/lvm/lfs and /dev/lvm/LFS in the fstab. These were mounted at /mnt/lfs and /mnt/lfs/LFS.

I removed those from lvm and created /dev/lvm/wip and /dev/lvm/WIP, but I did not remove the /dev/lvm/lfs and /dev/lvm/LFS entries from the fstab file, then rebooted.
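
Roughly what I did, from memory (the sizes here are made up, and the
fstab edit is the step I forgot):

    lvremove /dev/lvm/lfs /dev/lvm/LFS
    lvcreate -n wip -L 20G lvm
    lvcreate -n WIP -L 20G lvm
    # ...this is where /etc/fstab should have been updated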

As far as I could tell, systemd rolled over because it could not mount the lfs and LFS lvm partitions, since they were not there. It just hung waiting for mount points that were never going to show up. I could not get a "maintenance prompt"; it was just stuck trying to mount the non-existent lvm partitions.
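
I have since read that systemd can be told to drop straight to a shell
from the bootloader by appending a unit name to the kernel command
line; I have not verified it on this box, but it would look something
like:

    linux /vmlinuz-linux root=/dev/lvm/root rw systemd.unit=emergency.target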

My sysvinit systems simply spit out an error, "can't mount whatever, blah blah blah", and continued to boot. Of course those points were not mounted, but the system did boot fully.

As for booting without an fstab, I do that a lot on my custom "rescue" usb thumb drives, as they do not have an fstab file at all. I use no *.mount files either, and the system works just fine....the kernel knows where its root file system is. Try removing/moving the fstab from a test system. It will boot and run fine; of course you will lose swap and other such things, but if you have everything on one partition you're good.
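
If you do want swap back without an fstab, systemd also has native
.swap units; an untested sketch, assuming a swap lv at /dev/lvm/swap
(the unit name has to match the escaped device path):

    # /etc/systemd/system/dev-lvm-swap.swap
    [Swap]
    What=/dev/lvm/swap

    [Install]
    WantedBy=swap.target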
