Re: [systemd-devel] "bootctl install" on mdadm raid 1 fails

2018-01-23 Thread Bjørn Forsman
Hi Lennart,

On 23 January 2018 at 20:22, Lennart Poettering wrote:
> On Sa, 23.12.17 17:46, Bjørn Forsman (bjorn.fors...@gmail.com) wrote:
>> Would bootctl patches be considered for inclusion?
> Doing what precisely?

Whatever is needed to support "bootctl install" on mdadm raid 1. What
that is, precisely, I don't know yet.

The idea of an ESP on mdadm RAID 1 was not received well in this thread,
so I figured I'd check whether there is any chance of such a feature
being merged upstream before I spend any time trying to implement it.

Best regards,
Bjørn Forsman
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] "bootctl install" on mdadm raid 1 fails

2017-12-23 Thread Bjørn Forsman
On 14 December 2017 at 12:22, Lennart Poettering wrote:
> On Do, 14.12.17 22:17, Michael Chapman (m...@very.puzzling.org) wrote:
>> It's perhaps unlikely for firmware itself to write to the ESP, but certainly
>> anything launched from the firmware can. One of my boot entries is an EFI
>> shell, and it can move, copy, read and write files within the ESP.
>>
>> I think it's probably wise to avoid software RAID for the ESP.
>
> I think so too. There has been work to teach sd-boot "boot attempt
> counting", to make chrome-os-like automatic upgrading with safe
> fallback when the system continuously fails to boot available. That too
> would store the counts in the file system.

OK, there are things to look out for, but I don't think it's an
unreasonable setup. I want protection against a disk crash and HW RAID
is not available. What better option is there? (I've never had
firmware write to my boot disk / ESP, at least to my knowledge, so I
consider the risk of firmware messing up the SW RAID to be very
small.)

Would bootctl patches be considered for inclusion?

Best regards,
Bjørn Forsman


Re: [systemd-devel] "bootctl install" on mdadm raid 1 fails

2017-12-10 Thread Bjørn Forsman
On 9 December 2017 at 06:56, Andrei Borzenkov wrote:
> [...]
> Firmware is unaware of MD RAID and each partition is individually and
> independently writable by firmware.

1. "Firmware is unaware of MD RAID". I agree.
2. "... independently writable by firmware". I don't expect firmware
to _write_ to the ESP (does it?!). As long as it only reads, nothing
will get out of sync.

> Pretending that you can mirror them
> on OS level is simply wrong.

I think that statement is correct for all md RAID setups _except_
read-only access to RAID 1 with metadata 0.90 or 1.0 (superblock at
the end of the device). In that case, a filesystem written on the md
array lines up exactly with the underlying block device, so when the
system boots, the EFI firmware can read /dev/sda1 and see the very same
filesystem that the OS put on /dev/md127.
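
For the record, the kind of sanity check I have in mind looks roughly
like this (an untested sketch; device names and the 1 MiB window are
arbitrary examples):

  # With metadata 0.90/1.0 the RAID 1 data area starts at offset 0 of each
  # member, so the start of a member and the start of the array should be
  # bit-identical:
  $ cmp <(head -c 1M /dev/sda1) <(head -c 1M /dev/md127) && echo "member mirrors the array"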

Having the ESP on an mdadm RAID 1 array really works. (I now run such
a setup myself.) But my OS installer runs

  $ bootctl --path=/mnt/boot install
  Failed to probe partition scheme "/mnt/boot": Input/output error

which fails as shown, so it takes jumping through a few hoops to get
things working.

The hoops are (sketched as commands below):

1. Install the OS with /dev/sda1 on /boot (no RAID).
2. Set up a /dev/md127 RAID 1 array on /dev/sdb1 with the 2nd device missing.
   (May have to copy the filesystem UUID from /dev/sda1 to /dev/md127.)
3. rsync the filesystem contents from /dev/sda1 to /dev/md127.
4. Repurpose /dev/sda1 as the missing device in the /dev/md127 array.
5. Use efibootmgr to create the 2nd boot entry, for /dev/sdb1.
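
Roughly, as commands (an untested sketch; device names, the temporary
mount point /mnt/newboot and the loader path are examples, not copied
verbatim from my setup):

  # Step 2: create a degraded RAID 1 on sdb1 (superblock at the end) and
  # put a FAT filesystem on it.
  $ mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=1.0 /dev/sdb1 missing
  $ mkfs.vfat -F 32 /dev/md127   # optionally pass -i <volume-id> to reuse sda1's filesystem UUID
  # Step 3: copy the existing ESP contents onto the array.
  $ mkdir -p /mnt/newboot
  $ mount /dev/md127 /mnt/newboot
  $ rsync -a /boot/ /mnt/newboot/
  # Step 4: retire the old ESP and add sda1 as the second half of the mirror.
  # (You may need to wipe sda1's old filesystem signature first.)
  $ umount /boot
  $ mdadm /dev/md127 --add /dev/sda1
  # Step 5: register a 2nd EFI boot entry pointing at sdb1.
  $ efibootmgr --create --disk /dev/sdb --part 1 --label "Linux Boot Manager (sdb)" \
      --loader '\EFI\systemd\systemd-bootx64.efi'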

I think these steps could be simplified/eliminated if "bootctl"
learned about mdadm (level 1) arrays.

Best regards,
Bjørn Forsman


[systemd-devel] "bootctl install" on mdadm raid 1 fails

2017-12-08 Thread Bjørn Forsman
Hi all,

I assumed bootctl would be able to install onto an mdadm RAID 1 array
(mirror). But this happens:

  $ bootctl --path=/mnt/boot install
  Failed to probe partition scheme "/mnt/boot": Input/output error

The RAID array was created with --metadata=0.90 (superblock at the end
of the device). systemd v234 was used for testing.
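
The setup was roughly along these lines (a sketch, not my exact
commands; device names are examples):

  $ mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
  $ mkfs.vfat -F 32 /dev/md127
  $ mount /dev/md127 /mnt/boot
  $ bootctl --path=/mnt/boot install   # fails with the I/O error above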

I see people online who have worked around this by setting up the ESP
(/boot) manually and finalizing the install with two calls to
efibootmgr. But I'm hoping bootctl can handle this for me :-)
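
From what I've seen, those workarounds amount to copying the
systemd-boot EFI binary into each ESP by hand and then running
something like the following; the labels, disks and loader path here
are just examples:

  $ efibootmgr --create --disk /dev/sda --part 1 --label "systemd-boot (sda)" \
      --loader '\EFI\systemd\systemd-bootx64.efi'
  $ efibootmgr --create --disk /dev/sdb --part 1 --label "systemd-boot (sdb)" \
      --loader '\EFI\systemd\systemd-bootx64.efi'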

Any ideas?

Best regards,
Bjørn Forsman


Re: [systemd-devel] automount unit that never fails?

2016-11-04 Thread Bjørn Forsman
Hi Lennart,

On 3 November 2016 at 20:19, Lennart Poettering wrote:
> Your mail does not say in any way what precisely your issue is?

Did you read the first post? I hope not, because I don't really know
how to describe it more precisely than that :-)

Below is a copy of the first post.

When a mount unit fails (repeatedly), it takes the corresponding
automount unit down with it. To me this breaks a very nice property
I'd like to have:

  A mountpoint should EITHER return the mounted filesystem OR return an error.

As it is now, when the automount unit has failed, programs accessing
the mountpoint will not receive any errors and instead silently access
the local filesystem. That's bad!

I don't consider using mountpoint(1) or "test
mountpoint/IF_YOU_SEE_THIS_ITS_NOT_MOUNTED" proper solutions, because
they are out-of-band.

I was thinking of adding Restart=always to the automount unit, but
that still leaves a small timeframe where autofs is not active. So
that's not ideal either. Also, using Restart= implies a proper .mount
unit instead of /etc/fstab, but GVFS continuously activates autofs
mounts unless the option "x-gvfs-hide" is in /etc/fstab. So I'm kind
of stuck with /etc/fstab until that GVFS issue is solved.

So the question is, what is the reason for the mount unit to take down
the automount? I figure the automount should simply never fail.

Thoughts?

(I'm running NixOS 16.09 with systemd 231, trying to set up a robust,
lazy sshfs mount.)

Best regards,
Bjørn Forsman


Re: [systemd-devel] automount unit that never fails?

2016-10-29 Thread Bjørn Forsman
On 1 October 2016 at 17:14, Bjørn Forsman wrote:
> On 20 September 2016 at 09:08, Bjørn Forsman wrote:
>> I have a question/issue with the behaviour of (auto)mount units.
>> [...]
>
> Bump.

Bump again. Anyone?

If systemd + automount isn't the solution for automatically (and
robustly) mounting remote filesystems (in my case sshfs), what is?

Best regards,
Bjørn Forsman


Re: [systemd-devel] automount unit that never fails?

2016-10-01 Thread Bjørn Forsman
On 20 September 2016 at 09:08, Bjørn Forsman wrote:
> I have a question/issue with the behaviour of (auto)mount units.
> [...]

Bump.

Best regards,
Bjørn Forsman


[systemd-devel] automount unit that never fails?

2016-09-20 Thread Bjørn Forsman
Hi systemd developers,

My name is Bjørn Forsman and this is my first post to this list. I
have a question/issue with the behaviour of (auto)mount units.

When a mount unit fails (repeatedly), it takes the corresponding
automount unit down with it. To me this breaks a very nice property
I'd like to have:

  A mountpoint should EITHER return the mounted filesystem OR return an error.

As it is now, when the automount unit has failed, programs accessing
the mountpoint will not receive any errors and instead silently access
the local filesystem. That's bad!

I don't consider using mountpoint(1) or "test
mountpoint/IF_YOU_SEE_THIS_ITS_NOT_MOUNTED" proper solutions, because
they are out-of-band.
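
To make the failure mode concrete, here is a sketch of what I mean
(unit and path names are made-up examples, assuming an sshfs share
automounted on /mnt/data):

  $ systemctl stop mnt-data.automount   # stand-in for the automount unit having failed
  $ touch /mnt/data/oops                # succeeds silently, but the file lands on the local filesystem
  $ mountpoint /mnt/data                # the kind of out-of-band check I'd rather not depend on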

I was thinking of adding Restart=always to the automount unit, but
that still leaves a small timeframe where autofs is not active. So
that's not ideal either. Also, using Restart= implies a proper .mount
unit instead of /etc/fstab, but GVFS continuously activates autofs
mounts unless the option "x-gvfs-hide" is in /etc/fstab. So I'm kind
of stuck with /etc/fstab until that GVFS issue is solved.
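
For context, the kind of /etc/fstab entry I'm talking about looks
roughly like this (host, paths and timeout are illustrative, not my
exact line):

  user@fileserver:/export  /mnt/data  fuse.sshfs  noauto,x-systemd.automount,x-systemd.idle-timeout=60,x-gvfs-hide,_netdev,IdentityFile=/root/.ssh/id_ed25519  0  0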

So the question is, what is the reason for the mount unit to take down
the automount? I figure the automount should simply never fail.

Thoughts?

(I'm running NixOS 16.09 with systemd 231, trying to set up a robust,
lazy sshfs mount.)

Best regards,
Bjørn Forsman