Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-21 Thread Frank Kingswood

On 13/02/14 18:02, Austin S Hemmelgarn wrote:

On 2014-02-13 12:33, Chris Murphy wrote:


On Feb 13, 2014, at 1:50 AM, Frank Kingswood fr...@kingswood-consulting.co.uk 
wrote:


On 12/02/14 17:13, Saint Germain wrote:

Ok based on your advice, here is what I have done so far to use UEFI
(remember that the objective is to have a clean and simple BTRFS RAID1
install).

A) I start first with only one drive, I have gone with the following
partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
with parted):
sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
parted to set the type)
sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
parted to set the type),  mounted on /boot/efi


I'm curious, why so big? There's only one file of about 100kb there, and I was 
considering shrinking mine to the minimum possible (which seems to be about 33 
MB).


I'm not sure what OS loader you're using but I haven't seen a grubx64.efi less 
than ~500KB. In general I'm seeing it at about 1MB. The Fedora grub-efi and 
shim packages as installed on the ESP take up 10MB. So 33MiB is a bit small, 
and if we were more conservative, we'd update the OS loader by writing the new 
one to a temp directory rather than overwriting existing. And then remove the 
old, and rename the new.

The UEFI spec says if the system partition is FAT, it should be FAT32. For 
removable media it's FAT12/FAT16. I don't know what tool the various distro 
installers are using, but at least on Fedora they are using mkdosfs, part of 
dosfstools. And its cutoff for making FAT16/FAT32 based on media size is 500MB 
unless otherwise specified, and the installer doesn't specify so actually by 
default Fedora system partitions are FAT16, to no obvious ill effect. But if 
you want a FAT32 ESP created by the installer the ESP needs to be 500MiB or 
525MB. So 550MB is a reasonable number to make that happen.


For the record, this works for me on a Dell laptop:

I have resized my EFI partition to 40 MB, reformatted it with
   mkfs.vfat -F 32 -n EFI /dev/sda1
and copied back the grubx64.efi file.

Note that mkfs.vfat gives "WARNING: Not enough clusters for a 32 bit 
FAT!" unless the file system is more than 32 MiB.


Because the GUID in the GPT partition table is modified by the 
repartitioning, the BIOS must be updated to point to this file. That can 
be done with efibootmgr, or in my case it is also possible in the BIOS 
setup.
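
For example, something like this recreates the entry (a sketch; the
partition number, label and loader path are assumptions to adjust):
   efibootmgr -c -d /dev/sda -p 1 -L debian -l '\EFI\debian\grubx64.efi'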


Frank




Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-16 Thread Saint Germain
On Fri, 14 Feb 2014 15:33:10 +0100, Saint Germain saint...@gmail.com
wrote :

 On 11 February 2014 03:30, Saint Germain saint...@gmail.com wrote:
   I am experimenting with BTRFS and RAID1 on my Debian Wheezy (with
   backported kernel 3.12-0.bpo.1-amd64) using a motherboard with
   UEFI.
 
   I have installed Debian with the following partition on the first
   hard drive (no BTRFS subvolumes):
   /dev/sda1: for / (BTRFS)
   /dev/sda2: for /home (BTRFS)
   /dev/sda3: for swap
  
   Then I added another drive for a RAID1 configuration (with btrfs
   balance) and I installed grub on the second hard drive with
   grub-install /dev/sdb.
 
  You should be able to mount a two-device btrfs raid1 filesystem
  with only a single device with the degraded mount option, tho I
  believe current kernels refuse a read-write mount in that case, so
  you'll have read-only access until you btrfs device add a second
  device, so it can do normal raid1 mode once again.
 
  Meanwhile, I don't believe it's on the wiki, but it's worth noting
  my experience with btrfs raid1 mode in my pre-deployment tests.
  Actually, with the (I believe) mandatory read-only mount if raid1
  is degraded below two devices, this problem's going to be harder
  to run into than it was in my testing several kernels ago, but
  here's what I found:
 
  But as I said, if btrfs only allows read-only mounts of filesystems
  without enough devices to properly complete the raidlevel, that
  shouldn't be as big an issue these days, since it should be more
  difficult or impossible to get the two devices separately mounted
  writable in the first place, with the consequence that the
  differing copies issue will be difficult or impossible to trigger
  in the first place. =:^)
 
 
 Hello,
 
 With your advice and Chris's, I now have a (clean?) partition layout to
 start experimenting with RAID1 (and which boots correctly in UEFI
 mode):
 sda1 = BIOS Boot partition
 sda2 = EFI System Partition
 sda3 = BTRFS partition
 sda4 = swap partition
 For the moment I haven't created subvolumes (for / and for /home
 for instance) to keep things simple.
 
 The idea is then to create a RAID1 with a sdb drive (duplicate sda
 partitioning, add/balance/convert sdb3 + grub-install on sdb, add sdb
 swap UUID in /etc/fstab), shutdown and remove sda to check the
 procedure to replace it.
 
 I read the last thread on the subject "lost with degraded RAID1", but
 would like to confirm what the currently approved procedure is and
 whether it will remain valid for future BTRFS versions (especially
 regarding the read-only mount).
 
 So what should I do from there ?
 Here are a few questions:
 
 1) Boot in degraded mode: currently with my kernel
 (3.12-0.bpo.1-amd64, from Debian wheezy-backports) it seems that I can
 mount in read-write mode.
 However for future kernels, it seems that I will only be able to mount
 read-only ? See here:
 http://www.spinics.net/lists/linux-btrfs/msg20164.html
 https://bugzilla.kernel.org/show_bug.cgi?id=60594
 
 2) If I am able to mount read-write, is this the correct procedure:
   a) place a new drive in another physical location sdc (I don't think
 I can use the same sda physical location ?)
   b) boot in degraded mode on sdb
   c) use the 'replace' command to replace sda by sdc
   d) perhaps a 'balance' is necessary ?
 
 3) Can I also use the above procedure if I am only allowed to mount
 read-only ?
 
 4) If I want to use my system without RAID1 support (dangerous I
 know), after booting in degraded mode with read-write, can I convert
 back sdb from RAID1 to RAID0 in a safe way ?
 (btrfs balance start -dconvert=raid0 -mconvert=raid0 /)
 

To continue with this RAID1 recovery procedure (Debian stable with
kernel 3.12-0.bpo.1-amd64), I tried to reproduce Duncan's setup and the
result is not good.

Starting with a clean setup of 2 hard drive in RAID1 (sda and sdb) and
a clean snapshot of the rootfs:
1) poweroff, disconnect sda and boot on sdb with rootflags=ro,degraded
2) sdb is mounted ro but automatically remounted read-write by the initramfs
3) create a file witness1 and modify a file test.txt with 'alpha' inside
4) poweroff, connect sda, disconnect sdb and boot on sda
5) create a file witness2 and modify a file test.txt with 'beta' inside
6) poweroff, connect sdb and boot on sda
7) the modifications from step 3 are there (but not those from step 5)
8) launch scrub: a lot of errors are detected but no unrepairable errors
9) poweroff, disconnect sdb, boot on sda
10) the modifications from step 3 are there (but not those from step 5)
11) poweroff, boot on sda: kernel panic on startup
12) reboot, boot is possible
13) launch scrub: a lot of errors and kernel error
14) reboot, error on boot, and same error as step 13 with scrub
15) boot on previous snapshot of step1, same error on boot and
same error as step 13 with scrub.
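
For reference, the scrub in steps 8 and 13 amounts to something like this
(a sketch, assuming the filesystem is mounted at /):
   btrfs scrub start /
   btrfs scrub status /    # reports how many errors were found and corrected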


I hope that it will be useful for someone. It seems that mounting
read-write is really not a good idea (I have to find how to force ro with
Debian). The RAID1 

Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-14 Thread Saint Germain
On 11 February 2014 03:30, Saint Germain saint...@gmail.com wrote:
  I am experimenting with BTRFS and RAID1 on my Debian Wheezy (with
  backported kernel 3.12-0.bpo.1-amd64) using a motherboard with
  UEFI.

  I have installed Debian with the following partition on the first
  hard drive (no BTRFS subvolumes):
  /dev/sda1: for / (BTRFS)
  /dev/sda2: for /home (BTRFS)
  /dev/sda3: for swap
 
  Then I added another drive for a RAID1 configuration (with btrfs
  balance) and I installed grub on the second hard drive with
  grub-install /dev/sdb.

 You should be able to mount a two-device btrfs raid1 filesystem with
 only a single device with the degraded mount option, tho I believe
 current kernels refuse a read-write mount in that case, so you'll
 have read-only access until you btrfs device add a second device, so
 it can do normal raid1 mode once again.

 Meanwhile, I don't believe it's on the wiki, but it's worth noting my
 experience with btrfs raid1 mode in my pre-deployment tests.
 Actually, with the (I believe) mandatory read-only mount if raid1 is
 degraded below two devices, this problem's going to be harder to run
 into than it was in my testing several kernels ago, but here's what I
 found:

 But as I said, if btrfs only allows read-only mounts of filesystems
 without enough devices to properly complete the raidlevel, that
 shouldn't be as big an issue these days, since it should be more
 difficult or impossible to get the two devices separately mounted
 writable in the first place, with the consequence that the differing
 copies issue will be difficult or impossible to trigger in the first
 place. =:^)


Hello,

With your advice and Chris's, I now have a (clean?) partition layout to
start experimenting with RAID1 (and which boots correctly in UEFI
mode):
sda1 = BIOS Boot partition
sda2 = EFI System Partition
sda3 = BTRFS partition
sda4 = swap partition
For the moment I haven't created subvolumes (for / and for /home
for instance) to keep things simple.

The idea is then to create a RAID1 with a sdb drive (duplicate sda
partitioning, add/balance/convert sdb3 + grub-install on sdb, add sdb
swap UUID in /etc/fstab), shutdown and remove sda to check the
procedure to replace it.
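
A sketch of that add/convert step (assuming sdb3 is the new BTRFS
partition and the filesystem is mounted at /):
   btrfs device add /dev/sdb3 /
   btrfs balance start -dconvert=raid1 -mconvert=raid1 /
   grub-install /dev/sdb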

I read the last thread on the subject "lost with degraded RAID1", but
would like to confirm what the currently approved procedure is and
whether it will remain valid for future BTRFS versions (especially
regarding the read-only mount).

So what should I do from there ?
Here are a few questions:

1) Boot in degraded mode: currently with my kernel
(3.12-0.bpo.1-amd64, from Debian wheezy-backports) it seems that I can
mount in read-write mode.
However for future kernels, it seems that I will only be able to mount
read-only ? See here:
http://www.spinics.net/lists/linux-btrfs/msg20164.html
https://bugzilla.kernel.org/show_bug.cgi?id=60594

2) If I am able to mount read-write, is this the correct procedure:
  a) place a new drive in another physical location sdc (I don't think
I can use the same sda physical location ?)
  b) boot in degraded mode on sdb
  c) use the 'replace' command to replace sda by sdc
  d) perhaps a 'balance' is necessary ?

3) Can I also use the above procedure if I am only allowed to mount read-only ?

4) If I want to use my system without RAID1 support (dangerous I
know), after booting in degraded mode with read-write, can I convert
back sdb from RAID1 to RAID0 in a safe way ?
(btrfs balance start -dconvert=raid0 -mconvert=raid0 /)

5) Perhaps a recovery procedure which includes booting on a different
rescue disk would be more appropriate ?
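
For question 2, a minimal sketch of the 'replace' step from rescue media
(device names and the devid are placeholders; assuming sdc3 is the new
partition and the degraded filesystem gets mounted at /mnt):
   mount -o degraded /dev/sdb3 /mnt
   btrfs filesystem show /mnt           # note the devid of the missing device
   btrfs replace start <devid> /dev/sdc3 /mnt
   btrfs replace status /mnt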

Thanks again,


Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-13 Thread Austin S Hemmelgarn
On 2014-02-13 12:33, Chris Murphy wrote:
 
 On Feb 13, 2014, at 1:50 AM, Frank Kingswood 
 fr...@kingswood-consulting.co.uk wrote:
 
 On 12/02/14 17:13, Saint Germain wrote:
 Ok based on your advice, here is what I have done so far to use UEFI
 (remember that the objective is to have a clean and simple BTRFS RAID1
 install).

 A) I start first with only one drive, I have gone with the following
 partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
 with parted):
 sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
 parted to set the type)
 sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
 parted to set the type),  mounted on /boot/efi

 I'm curious, why so big? There's only one file of about 100kb there, and I 
 was considering shrinking mine to the minimum possible (which seems to be 
 about 33 MB).
 
 I'm not sure what OS loader you're using but I haven't seen a grubx64.efi 
 less than ~500KB. In general I'm seeing it at about 1MB. The Fedora grub-efi 
 and shim packages as installed on the ESP take up 10MB. So 33MiB is a bit 
 small, and if we were more conservative, we'd update the OS loader by writing 
 the new one to a temp directory rather than overwriting existing. And then 
 remove the old, and rename the new.
 
 The UEFI spec says if the system partition is FAT, it should be FAT32. For 
 removable media it's FAT12/FAT16. I don't know what tool the various distro 
 installers are using, but at least on Fedora they are using mkdosfs, part of 
 dosfstools. And its cutoff for making FAT16/FAT32 based on media size is 
 500MB unless otherwise specified, and the installer doesn't specify so 
 actually by default Fedora system partitions are FAT16, to no obvious ill 
 effect. But if you want a FAT32 ESP created by the installer the ESP needs to 
 be 500MiB or 525MB. So 550MB is a reasonable number to make that happen.
 
 If we were slightly smarter (and more A.R.), UEFI bugs aside, we'd put the 
 ESP as the last partition on the disk rather than as the first and then 
 honestly would we really care about consuming even 1GiB of the slowest part 
 of a spinning disk? Or causing a bit of overprovisioning for SSD? No. It's 
 probably a squeak of an improvement if anything.
 
 For those who want to use gummiboot, it calls for the kernel and initramfs to 
 be located on the ESP, which is then mounted at /boot rather than /boot/efi. So 
 that's also a reason to make it bigger than usual.
 
 
 
 
 sda3 = 1 TiB root partition (BTRFS), mounted on /
 sda4 = 6 GiB swap partition
 (that way I should be able to be compatible with both CSM or UEFI)

 B) normal Debian installation on sdas, activate the CSM on the
 motherboard and reboot.

 C) apt-get install grub-efi-amd64 and grub-install /dev/sda

 And the problems begin:
 1) grub-install doesn't give any error but using the --debug I can see
 that it is not using EFI.
 2) Ok I force with grub-install --target=x86_64-efi
 --efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
 /dev/sda
 3) This time something is generated in /boot/efi: 
 /boot/efi/EFI/grub/grubx64.efi
 4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
 /boot/efi/EFI/boot/bootx64.efi

 is EFI/boot/ correct here?
 
 If you want a fallback bootloader, yes.
 

 If you're lucky then your BIOS will tell what path it will try to read for 
 the boot code. For me that is /EFI/debian/grubx64.efi.
 
 NVRAM is what does this. But if NVRAM becomes corrupt, or the entry is 
 deleted for whatever reason, the proper fallback is boot<arch>.efi.

While this is what the UEFI spec says is supposed to be the fallback,
many systems don't actually look there unless the media is removable.
All of my UEFI systems instead look for Microsoft/Boot/bootmgfw.efi as
the fallback (because most x86 system designers don't care at all about
standards compliance as long as it will run windows).



Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-13 Thread Chris Murphy

On Feb 13, 2014, at 11:02 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
 
 While this is what the UEFI spec says is supposed to be the fallback,
 many systems don't actually look there unless the media is removable.
 All of my UEFI systems instead look for Microsoft/Boot/bootmgfw.efi as
 the fallback (because most x86 system designers don't care at all about
 standards compliance as long as it will run windows).

True, and they should be filed as bugs with the manufacturer, citing the UEFI 
spec as being violated. There's not much else to do in such cases, other than 
to account for the fact that they use a different fallback location: 
Microsoft/Boot/bootmgfw.efi.

Nevertheless, so long as NVRAM is behaving and isn't pilfered for other 
purposes, then you can specify a fallback entry there.

Chris Murphy


Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-13 Thread Saint Germain
On Thu, 13 Feb 2014 10:43:08 -0700, Chris Murphy
li...@colorremedies.com wrote :
  sda3 = 1 TiB root partition (BTRFS), mounted on /
  sda4 = 6 GiB swap partition
  (that way I should be able to be compatible with both CSM or UEFI)
  
  B) normal Debian installation on sdas, activate the CSM on the
  motherboard and reboot.
  
  C) apt-get install grub-efi-amd64 and grub-install /dev/sda
  
  And the problems begin:
  1) grub-install doesn't give any error but using the --debug I
  can see that it is not using EFI.
  2) Ok I force with grub-install --target=x86_64-efi
  --efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
  /dev/sda
  3) This time something is generated in /boot/efi:
  /boot/efi/EFI/grub/grubx64.efi
  4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
  /boot/efi/EFI/boot/bootx64.efi
  
  
  is EFI/boot/ correct here?
  
  If you're lucky then your BIOS will tell what path it will try to
  read for the boot code. For me that is /EFI/debian/grubx64.efi.
  
  
   I followed the advice here (first result on Google with grub uefi
  debian): http://tanguy.ortolo.eu/blog/article51/debian-efi
  
  5) Reboot and disable the CSM on the motherboard
  6) No boot possible, I always go directly to the UEFI-BIOS
  
   I am currently stuck there. I read a lot of conflicting advice, none of
   which worked:
- use modprobe efivars and efibootmgr: not possible because I
  have not booted in EFI (chicken-egg problem)
  
  
  Not exactly. Boot in EFI mode into your favourite installer rescue
  mode, then chroot into the target filesystem and run efibootmgr
  there.
  
  
  In the end I managed to do it like this:
  1) Make a USB stick with FAT32 partition
  2) Install grub on it with:
  grub-install --target=x86_64-efi --efi-directory=/media/usb0
  --removable 3) Note on a paper the grub commands to start the
  kernel in /boot/grub/grub.cfg 3) Reboot, Disable CSM in the
  motherboard boot utility (BIOS?), Reboot with the USB stick
  connected 4) Normally it should have started on the USB stick grub
  command-line 5) Enter the necessary command to start the kernel (if
  you have some problem with video mode, use insmod efi_gop)
  6) Normally your operating system should start normally
  7) Check that efibootmgr is installed and working (normally efivars
  should be loaded in the modules already)
  8) grub-install --efi-directory=/boot/efi --recheck --debug
  (with the debug info you should see that it is using grub-efi and
  not grub-pc) 9) efibootmgr -c -d /dev/sda -p 2 -w -L Debian
  (GRUB) -l '\EFI\Debian\grubx64.efi'
   (replace -p 2 with your correct ESP partition number)
  10) Reboot and enjoy !
 
 OK at least with GRUB 2.00 I never have to use any options with
 grub-install when installing to a chrooted system. It also even
 writes the proper entry into NVRAM for me, I don't have to use
 efibootmgr.

Yes you are right, this is probably unnecessary (see below).

 
 Also I've never had single \ work with efibootmgr from shell. I have
 to use \\. Try typing efibootmgr -v to see the actual entry you
 created and whether it has the \ in the path.
 

Here is the output:
BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0000
Boot0000* debian
HD(2,7d8,106430,5d012c09-b70d-4225-ae18-9831f997c493)File(\EFI\debian\grubx64.efi)
Boot0001* Debian (GRUB) 
HD(2,7d8,106430,5d012c09-b70d-4225-ae18-9831f997c493)File(\EFI\Debian\grubx64.efi)

Ah the joy of FAT32 and the case sensitivity !
So it seems that grub-install automatically installs the correct entry
and using efibootmgr was unnecessary.
However it seems that single '\' works.
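
For comparison, the double-backslash form Chris mentions would look like
this (a sketch, reusing the partition number and label from above):
   efibootmgr -v
   efibootmgr -c -d /dev/sda -p 2 -w -L 'Debian (GRUB)' -l '\\EFI\\debian\\grubx64.efi'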

 But one thing that explains why the UEFI bootloading stuff is
 confusing for you is that every distro keeps their own grub patches.
 So there is very possibly a lot of difference between the downstream
 grub behaviors, and upstream.
 

Understood. That is why I took the trouble to describe what I did.
Perhaps it will be useful for others (most info on the topic was
not Debian-specific...).

Thanks again for your insights !


Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-13 Thread Frank Kingswood

On 12/02/14 17:13, Saint Germain wrote:

Ok based on your advice, here is what I have done so far to use UEFI
(remember that the objective is to have a clean and simple BTRFS RAID1
install).

A) I start first with only one drive, I have gone with the following
partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
with parted):
sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
parted to set the type)
sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
parted to set the type),  mounted on /boot/efi


I'm curious, why so big? There's only one file of about 100kb there, and 
I was considering shrinking mine to the minimum possible (which seems to 
be about 33 MB).



sda3 = 1 TiB root partition (BTRFS), mounted on /
sda4 = 6 GiB swap partition
(that way I should be able to be compatible with both CSM or UEFI)

B) normal Debian installation on sdas, activate the CSM on the
motherboard and reboot.

C) apt-get install grub-efi-amd64 and grub-install /dev/sda

And the problems begin:
1) grub-install doesn't give any error but using the --debug I can see
that it is not using EFI.
2) Ok I force with grub-install --target=x86_64-efi
--efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
/dev/sda
3) This time something is generated in /boot/efi: /boot/efi/EFI/grub/grubx64.efi
4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
/boot/efi/EFI/boot/bootx64.efi


 is EFI/boot/ correct here?

If you're lucky then your BIOS will tell what path it will try to read 
for the boot code. For me that is /EFI/debian/grubx64.efi.



5) Reboot and disable the CSM on the motherboard
6) No boot possible, I always go directly to the UEFI-BIOS

I am currently stuck there. I read a lot of conflicting advice, none of which
worked:
   - use modprobe efivars and efibootmgr: not possible because I have
not booted in EFI (chicken-egg problem)


Not exactly. Boot in EFI mode into your favourite installer rescue mode, 
then chroot into the target filesystem and run efibootmgr there.
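
A rough sketch of that rescue/chroot route (device names are assumptions;
here sda3 is the root filesystem and sda2 the ESP):
   mount /dev/sda3 /mnt
   mount /dev/sda2 /mnt/boot/efi
   for d in dev proc sys; do mount --bind /$d /mnt/$d; done
   chroot /mnt
   modprobe efivars      # if not already loaded
   efibootmgr -v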


Frank



Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-13 Thread Saint Germain
On 13 February 2014 09:50, Frank Kingswood
fr...@kingswood-consulting.co.uk wrote:
 On 12/02/14 17:13, Saint Germain wrote:

 Ok based on your advice, here is what I have done so far to use UEFI
 (remember that the objective is to have a clean and simple BTRFS RAID1
 install).

 A) I start first with only one drive, I have gone with the following
 partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
 with parted):
 sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
 parted to set the type)
 sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
 parted to set the type),  mounted on /boot/efi


 I'm curious, why so big? There's only one file of about 100kb there, and I
 was considering shrinking mine to the minimum possible (which seems to be
 about 33 MB).


It is quite difficult to find reliable information on this whole UEFI
boot with Linux (information is easy to find, but which of it to follow?
There is so much different info out there).

So I don't know if this 550 MiB is an urban legend or not, but you can
find several people recommending it and the reason why:
http://askubuntu.com/questions/336439/any-problems-with-this-partition-scheme
http://askubuntu.com/questions/287441/different-uses-of-term-efi-partition
https://bbs.archlinux.org/viewtopic.php?pid=1306753
http://forums.gentoo.org/viewtopic-p-7352214.html

Other people recommend around 200-300 MiB, so I basically took the
upper limit to see what happens.
If you have more reliable info on the topic I would be interested !

 sda3 = 1 TiB root partition (BTRFS), mounted on /
 sda4 = 6 GiB swap partition
 (that way I should be able to be compatible with both CSM or UEFI)

 B) normal Debian installation on sdas, activate the CSM on the
 motherboard and reboot.

 C) apt-get install grub-efi-amd64 and grub-install /dev/sda

 And the problems begin:
 1) grub-install doesn't give any error but using the --debug I can see
 that it is not using EFI.
 2) Ok I force with grub-install --target=x86_64-efi
 --efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
 /dev/sda
 3) This time something is generated in /boot/efi:
 /boot/efi/EFI/grub/grubx64.efi
 4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
 /boot/efi/EFI/boot/bootx64.efi


  is EFI/boot/ correct here?

 If you're lucky then your BIOS will tell what path it will try to read for
 the boot code. For me that is /EFI/debian/grubx64.efi.


I followed the advice here (first result on Google with grub uefi debian):
http://tanguy.ortolo.eu/blog/article51/debian-efi

 5) Reboot and disable the CSM on the motherboard
 6) No boot possible, I always go directly to the UEFI-BIOS

 I am currently stuck there. I read a lot of conflicting advice, none of which
 worked:
- use modprobe efivars and efibootmgr: not possible because I have
 not booted in EFI (chicken-egg problem)


 Not exactly. Boot in EFI mode into your favourite installer rescue mode,
 then chroot into the target filesystem and run efibootmgr there.


In the end I managed to do it like this:
1) Make a USB stick with FAT32 partition
2) Install grub on it with:
grub-install --target=x86_64-efi --efi-directory=/media/usb0 --removable
3) Note on a paper the grub commands to start the kernel in /boot/grub/grub.cfg
3) Reboot, Disable CSM in the motherboard boot utility (BIOS?), Reboot
with the USB stick connected
4) Normally it should have started on the USB stick grub command-line
5) Enter the necessary command to start the kernel (if you have some
problem with video mode, use insmod efi_gop)
6) Normally your operating system should start normally
7) Check that efibootmgr is installed and working (normally efivars
should be loaded in the modules already)
8) grub-install --efi-directory=/boot/efi --recheck --debug
(with the debug info you should see that it is using grub-efi and not grub-pc)
9) efibootmgr -c -d /dev/sda -p 2 -w -L Debian (GRUB) -l
'\EFI\Debian\grubx64.efi'
(replace -p 2 with your correct ESP partition number)
10) Reboot and enjoy !
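
For step 5, the grub command-line input looks roughly like this (kernel and
initrd file names and the root device are placeholders for your own):
   grub> insmod efi_gop
   grub> linux /boot/vmlinuz-3.12-0.bpo.1-amd64 root=/dev/sda3 ro
   grub> initrd /boot/initrd.img-3.12-0.bpo.1-amd64
   grub> boot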

I made a lot of mistakes during these steps. The good thing is that
errors are quite verbose, so you can easily see what is going wrong.
I hope that it will be easier for the next Debian user.

So now I can continue on this BTRFS RAID1 adventure... Let's see if my
setup is resilient to a hard drive failure.

Thanks for the help. Most comments here are quite spot-on and
reliable, that is very helpful !


Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-13 Thread Chris Murphy

On Feb 13, 2014, at 1:50 AM, Frank Kingswood fr...@kingswood-consulting.co.uk 
wrote:

 On 12/02/14 17:13, Saint Germain wrote:
 Ok based on your advice, here is what I have done so far to use UEFI
 (remember that the objective is to have a clean and simple BTRFS RAID1
 install).
 
 A) I start first with only one drive, I have gone with the following
 partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
 with parted):
 sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
 parted to set the type)
 sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
 parted to set the type),  mounted on /boot/efi
 
 I'm curious, why so big? There's only one file of about 100kb there, and I 
 was considering shrinking mine to the minimum possible (which seems to be 
 about 33 MB).

I'm not sure what OS loader you're using but I haven't seen a grubx64.efi less 
than ~500KB. In general I'm seeing it at about 1MB. The Fedora grub-efi and 
shim packages as installed on the ESP take up 10MB. So 33MiB is a bit small, 
and if we were more conservative, we'd update the OS loader by writing the new 
one to a temp directory rather than overwriting existing. And then remove the 
old, and rename the new.

The UEFI spec says if the system partition is FAT, it should be FAT32. For 
removable media it's FAT12/FAT16. I don't know what tool the various distro 
installers are using, but at least on Fedora they are using mkdosfs, part of 
dosfstools. And its cutoff for making FAT16/FAT32 based on media size is 500MB 
unless otherwise specified, and the installer doesn't specify so actually by 
default Fedora system partitions are FAT16, to no obvious ill effect. But if 
you want a FAT32 ESP created by the installer the ESP needs to be 500MiB or 
525MB. So 550MB is a reasonable number to make that happen.

If we were slightly smarter (and more A.R.), UEFI bugs aside, we'd put the ESP 
as the last partition on the disk rather than as the first and then honestly 
would we really care about consuming even 1GiB of the slowest part of a 
spinning disk? Or causing a bit of overprovisioning for SSD? No. It's probably 
a squeak of an improvement if anything.

For those who want to use gummiboot, it calls for the kernel and initramfs to 
be located on the ESP, which is then mounted at /boot rather than /boot/efi. So that's 
also a reason to make it bigger than usual.




 sda3 = 1 TiB root partition (BTRFS), mounted on /
 sda4 = 6 GiB swap partition
 (that way I should be able to be compatible with both CSM or UEFI)
 
 B) normal Debian installation on sdas, activate the CSM on the
 motherboard and reboot.
 
 C) apt-get install grub-efi-amd64 and grub-install /dev/sda
 
 And the problems begin:
 1) grub-install doesn't give any error but using the --debug I can see
 that it is not using EFI.
 2) Ok I force with grub-install --target=x86_64-efi
 --efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
 /dev/sda
 3) This time something is generated in /boot/efi: 
 /boot/efi/EFI/grub/grubx64.efi
 4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
 /boot/efi/EFI/boot/bootx64.efi
 
 is EFI/boot/ correct here?

If you want a fallback bootloader, yes.

 
 If you're lucky then your BIOS will tell what path it will try to read for 
 the boot code. For me that is /EFI/debian/grubx64.efi.

NVRAM is what does this. But if NVRAM becomes corrupt, or the entry is deleted 
for whatever reason, the proper fallback is boot<arch>.efi.



Chris Murphy



Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-13 Thread Chris Murphy

On Feb 13, 2014, at 3:03 AM, Saint Germain saint...@gmail.com wrote:

 On 13 February 2014 09:50, Frank Kingswood
 fr...@kingswood-consulting.co.uk wrote:
 On 12/02/14 17:13, Saint Germain wrote:
 
 Ok based on your advice, here is what I have done so far to use UEFI
 (remember that the objective is to have a clean and simple BTRFS RAID1
 install).
 
 A) I start first with only one drive, I have gone with the following
 partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
 with parted):
 sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
 parted to set the type)
 sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
 parted to set the type),  mounted on /boot/efi
 
 
 I'm curious, why so big? There's only one file of about 100kb there, and I
 was considering shrinking mine to the minimum possible (which seems to be
 about 33 MB).
 
 
 It is quite difficult to find reliable information on this whole UEFI
 boot with Linux (information is easy to find, but which of it to follow?
 There is so much different info out there).
 
 So I don't know if this 550 MiB is an urban legend or not, but you can
 find several people recommending it and the reason why:
 http://askubuntu.com/questions/336439/any-problems-with-this-partition-scheme
 http://askubuntu.com/questions/287441/different-uses-of-term-efi-partition
 https://bbs.archlinux.org/viewtopic.php?pid=1306753
 http://forums.gentoo.org/viewtopic-p-7352214.html
 
 Other people recommend around 200-300 MiB, so I basically took the
 upper limit to see what happens.
 If you have more reliable info on the topic I would be interested !

Draw straws, flip a coin, I wouldn't stress out about it that much. Considering 
it's effectively impossible to resize a partition 1 ESP once there's a 
partition 2 right behind it, I think 550MB is totally sane in particular if you 
plan on experimenting with different EFI OS Loaders, or multibooting since all 
OS Loaders go on the ESP. If it's a single OS only system, yeah sure you can 
get by with a smaller ESP.

And yes the UEFI boot thing is like playing jacks (knucklebones). Many variants.



 
 sda3 = 1 TiB root partition (BTRFS), mounted on /
 sda4 = 6 GiB swap partition
 (that way I should be able to be compatible with both CSM or UEFI)
 
 B) normal Debian installation on sdas, activate the CSM on the
 motherboard and reboot.
 
 C) apt-get install grub-efi-amd64 and grub-install /dev/sda
 
 And the problems begin:
 1) grub-install doesn't give any error but using the --debug I can see
 that it is not using EFI.
 2) Ok I force with grub-install --target=x86_64-efi
 --efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
 /dev/sda
 3) This time something is generated in /boot/efi:
 /boot/efi/EFI/grub/grubx64.efi
 4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
 /boot/efi/EFI/boot/bootx64.efi
 
 
 is EFI/boot/ correct here?
 
 If you're lucky then your BIOS will tell what path it will try to read for
 the boot code. For me that is /EFI/debian/grubx64.efi.
 
 
 I followed the advice here (first result on Google with grub uefi debian):
 http://tanguy.ortolo.eu/blog/article51/debian-efi
 
 5) Reboot and disable the CSM on the motherboard
 6) No boot possible, I always go directly to the UEFI-BIOS
 
 I am currently stuck there. I read a lot of conflicting advice, none of which
 worked:
   - use modprobe efivars and efibootmgr: not possible because I have
 not booted in EFI (chicken-egg problem)
 
 
 Not exactly. Boot in EFI mode into your favourite installer rescue mode,
 then chroot into the target filesystem and run efibootmgr there.
 
 
 In the end I managed to do it like this:
 1) Make a USB stick with FAT32 partition
 2) Install grub on it with:
 grub-install --target=x86_64-efi --efi-directory=/media/usb0 --removable
 3) Note on a paper the grub commands to start the kernel in 
 /boot/grub/grub.cfg
 3) Reboot, Disable CSM in the motherboard boot utility (BIOS?), Reboot
 with the USB stick connected
 4) Normally it should have started on the USB stick grub command-line
 5) Enter the necessary command to start the kernel (if you have some
 problem with video mode, use insmod efi_gop)
 6) Normally your operating system should start normally
 7) Check that efibootmgr is installed and working (normally efivars
 should be loaded in the modules already)
 8) grub-install --efi-directory=/boot/efi --recheck --debug
 (with the debug info you should see that it is using grub-efi and not grub-pc)
 9) efibootmgr -c -d /dev/sda -p 2 -w -L Debian (GRUB) -l
 '\EFI\Debian\grubx64.efi'
 (replace -p 2 with your correct ESP partition number)
 10) Reboot and enjoy !

OK at least with GRUB 2.00 I never have to use any options with grub-install 
when installing to a chrooted system. It also even writes the proper entry into 
NVRAM for me, I don't have to use efibootmgr.

Also I've never had single \ work with efibootmgr from shell. I have to use \\. 
Try typing efibootmgr -v to see 

Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-12 Thread Saint Germain
On 11 February 2014 19:15, Chris Murphy li...@colorremedies.com wrote:

 To summarize, I think I have 3 options for partitioning (I am not
 considering UEFI secure boot or swap):
 1) grub, BTRFS partition (i.e. full disk in BTRFS), /boot inside BTRFS 
 subvolume

 This doesn't seem like a good idea for a boot drive to be without partitions.


 2) grub, GPT partition, with (A) on sda1, and a BTRFS partition on
 sda2, /boot inside BTRFS subvolume
 3) grub, GPT partition, with (A) on sda1, /boot (ext4) on sda2, and a
 BTRFS on sda3

 (A) = BIOS Boot partition (1 MiB) or EFI System Partition (FAT32, 550MiB)

 I don't really see the point of having UEFI/ESP if I don't use another
 proprietary operating system, so I think I will go with (A) = BIOS
 Boot partition except if there is something I have missed.

 You need to boot your system in UEFI and CSM-BIOS modes, and compare the 
 dmesg for each. I'm finding it common that the CSM limits power management, and 
 relegates drives to IDE speeds rather than full SATA link speeds. Sometimes 
 it's unavoidable to use the CSM if it has better overall behavior for your 
 use case. I've found it to be lacking and have abandoned it. It's basically 
 intended for booting Windows XP, right?


Ok based on your advice, here is what I have done so far to use UEFI
(remember that the objective is to have a clean and simple BTRFS RAID1
install).

A) I start first with only one drive, I have gone with the following
partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
with parted):
sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
parted to set the type)
sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
parted to set the type),  mounted on /boot/efi
sda3 = 1 TiB root partition (BTRFS), mounted on /
sda4 = 6 GiB swap partition
(that way I should be able to be compatible with both CSM or UEFI)
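
As a sketch, the non-interactive parted commands for that layout look
roughly like this (sizes and the device name are the ones above and may
need adjusting):
   parted /dev/sda mklabel gpt
   parted /dev/sda mkpart primary 1MiB 2MiB
   parted /dev/sda set 1 bios_grub on
   parted /dev/sda mkpart primary fat32 2MiB 552MiB
   parted /dev/sda set 2 boot on
   parted /dev/sda mkpart primary btrfs 552MiB -6GiB
   parted /dev/sda mkpart primary linux-swap -6GiB 100%
   mkfs.vfat -F 32 /dev/sda2
   mkfs.btrfs /dev/sda3
   mkswap /dev/sda4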

B) normal Debian installation on sdas, activate the CSM on the
motherboard and reboot.

C) apt-get install grub-efi-amd64 and grub-install /dev/sda

And the problems begin:
1) grub-install doesn't give any error but using the --debug I can see
that it is not using EFI.
2) Ok I force with grub-install --target=x86_64-efi
--efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
/dev/sda
3) This time something is generated in /boot/efi: /boot/efi/EFI/grub/grubx64.efi
4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
/boot/efi/EFI/boot/bootx64.efi
5) Reboot and disable the CSM on the motherboard
6) No boot possible, I always go directly to the UEFI-BIOS

I am currently stuck there. I read a lot of conflicting advice, none of which
worked:
  - use modprobe efivars and efibootmgr: not possible because I have
not booted in EFI (chicken-egg problem)
  - use update-grub or use grub-mkconfig (to generate
/boot/efi/grub/grub.cfg): no results
  - other exotic commands...
So I will try to upgrade to grub 2.02beta (as recommended by Chris
Murphy) but I am not sure that it will help. If someone has some
Debian experience on this UEFI install, please don't hesitate to
propose solutions !

I will continue to document this experience (hope that it will be
useful for others), and hope to get to the point where I can have a
good system in BTRFS RAID1 mode.
You have to be very motivated to get into this, it is really a challenge ! ;-)


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-12 Thread Saint Germain
On 11 February 2014 21:35, Duncan 1i5t5.dun...@cox.net wrote:
 Saint Germain posted on Tue, 11 Feb 2014 11:04:57 +0100 as excerpted:

 The big problem I currently have is that based on your input, I hesitate
 a lot on my partitioning scheme: should I use a dedicated /boot
 partition or should I have one global BTRFS partition ?
 It is not very clear in the doc (a lot of people used a dedicated /boot
 because at that time, grub couldn't natively boot BTRFS it seems, but it
 has changed).
 Could you recommend a partitioning scheme for a simple RAID1 with 2
 identical hard drives (just for home computing, not business).

 FWIW... I'm planning to and have your previous message covering that
 still marked unread to reply to later.  But real life has temporarily
 been monopolizing my time so the last day or two I've only done
 relatively short and quick replies.  That one will require a bit more
 time to answer to my satisfaction.

 So I'm punting for the moment.  But FWIW I tend to be a reasonably heavy
 partitioner (tho nowhere near what I used to be), so a lot of folks will
 consider my setup somewhat extreme.  That's OK.  It's my computer, setup
 for my purposes, not their computer for theirs, and it works very well
 for me, so it's all good. =:^)

 But hopefully I'll get back to that with a longer reply by the end of the
 week.  If I don't, you can probably consider that monopoly lasting longer
 than I thought, and it could be that I'll never get back to properly
 reply.  But it's an interesting enough topic to me that I'll /probably/
 get back, just not right ATM.


No problem, I have started another thread where we discuss partitioning.
It may be slightly off-topic, but the intention is really to have a
BTRFS-friendly partition layout.
For instance it seems that a dedicated /boot partition instead of a
subvolume is not necessary (better to have the /boot in the RAID1).
However I'm no expert.

Thanks for your help.


Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-12 Thread Chris Murphy

On Feb 12, 2014, at 10:13 AM, Saint Germain saint...@gmail.com wrote:

 On 11 February 2014 19:15, Chris Murphy li...@colorremedies.com wrote:
 
 To summarize, I think I have 3 options for partitioning (I am not
 considering UEFI secure boot or swap):
 1) grub, BTRFS partition (i.e. full disk in BTRFS), /boot inside BTRFS 
 subvolume
 
 This doesn't seem like a good idea for a boot drive to be without partitions.
 
 
 2) grub, GPT partition, with (A) on sda1, and a BTRFS partition on
 sda2, /boot inside BTRFS subvolume
 3) grub, GPT partition, with (A) on sda1, /boot (ext4) on sda2, and a
 BTRFS on sda3
 
 (A) = BIOS Boot partition (1 MiB) or EFI System Partition (FAT32, 550MiB)
 
 I don't really see the point of having UEFI/ESP if I don't use another
 proprietary operating system, so I think I will go with (A) = BIOS
 Boot partition except if there is something I have missed.
 
 You need to boot your system in UEFI and CSM-BIOS modes, and compare the 
 dmesg for each. I'm finding it common that the CSM limits power management, and 
 relegates drives to IDE speeds rather than full SATA link speeds. Sometimes 
 it's unavoidable to use the CSM if it has better overall behavior for your 
 use case. I've found it to be lacking and have abandoned it. It's basically 
 intended for booting Windows XP, right?
 
 
 Ok based on your advice, here is what I have done so far to use UEFI
 (remember that the objective is to have a clean and simple BTRFS RAID1
 install).
 
 A) I start first with only one drive, I have gone with the following
 partition scheme (Debian wheezy, kernel 3.12, grub 2.00, GPT partition
 with parted):
 sda1 = 1MiB BIOS Boot partition (no FS, set 1 bios_grub on with
 parted to set the type)
 sda2 = 550 MiB EFI System Partition (FAT32, toggle 2 boot with
 parted to set the type),  mounted on /boot/efi
 sda3 = 1 TiB root partition (BTRFS), mounted on /
 sda4 = 6 GiB swap partition
 (that way I should be able to be compatible with both CSM or UEFI)
 
 B) normal Debian installation on sdas, activate the CSM on the
 motherboard and reboot.
 
 C) apt-get install grub-efi-amd64 and grub-install /dev/sda
 
 And the problems begin:
 1) grub-install doesn't give any error but using the --debug I can see
 that it is not using EFI.


Expected. If you boot with CSM enabled, you get a BIOS boot.



 2) Ok I force with grub-install --target=x86_64-efi
 --efi-directory=/boot/efi --bootloader-id=grub --recheck --debug
 /dev/sda
 3) This time something is generated in /boot/efi: 
 /boot/efi/EFI/grub/grubx64.efi
 4) Copy the file /boot/efi/EFI/grub/grubx64.efi to
 /boot/efi/EFI/boot/bootx64.efi

I wouldn't expect this grubx64.efi to work because efivars and access to NVRAM 
simply don't exist if you are CSM booted.

 5) Reboot and disable the CSM on the motherboard
 6) No boot possible, I always go directly to the UEFI-BIOS

Boot CSM mode, grub-install /dev/sda will install a BIOS grub into the BIOS 
Boot partition. It's fine for the fstab to still mount the ESP to /boot/efi - 
it's ignored.

Boot UEFI mode, grub-install will install a UEFI grub to the ESP.



 I am currently stuck there. I read a lot of conflicting advice, none of which
 worked:
  - use modprobe efivars and efibootmgr: not possible because I have
 not booted in EFI (chicken-egg problem)
  - use update-grub or use grub-mkconfig (to generate
 /boot/efi/grub/grub.cfg): no results
  - other exotic commands…

Hmm I'm not certain whether the grub.cfg produced by grub-mkconfig is the same 
in UEFI vs CSM modes. Boot in both modes, create grub.cfg in each, then diff 
them.
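
For instance (a sketch; the output paths are arbitrary):
   grub-mkconfig -o /tmp/grub.cfg.uefi   # while booted in UEFI mode
   grub-mkconfig -o /tmp/grub.cfg.csm    # while booted with the CSM enabled
   diff /tmp/grub.cfg.uefi /tmp/grub.cfg.csm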

However, I think grub.cfg on the ESP is a mistake because for bootable raid1 to 
be resilient in the face of one disk failure, you need some way to always keep 
each disk's ESP synchronized. It's better for the grub.cfg to be on Btrfs raid1 
/boot/grub/grub.cfg. 

It's a separate topic for the grub list but I'm finding the idea of a 
persistently rw mounted ESP to be troubling. This isn't done on Windows or OS 
X, I don't know why this convention is used on Linux. The firmware doesn't 
mount it per se, but it's read only in any case, and the OS loader doesn't need 
the ESP mounted rw either, nor should it.

 
 I will continue to document this experience (hope that it will be
 useful for others), and hope to get to the point where I can have a
 good system in BTRFS RAID1 mode.
 You have to be very motivated to get into this, it is really a challenge ! ;-)

Bootable raid1 on UEFI on grub-devel@
http://lists.gnu.org/archive/html/grub-devel/2014-02/msg2.html

I think it's just not fully considered yet.



Chris Murphy



Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-11 Thread Saint Germain
On 11 February 2014 07:59, Duncan 1i5t5.dun...@cox.net wrote:
 Saint Germain posted on Tue, 11 Feb 2014 04:15:27 +0100 as excerpted:

 Ok I need to really understand how my motherboard works (new Z87E-ITX).
 It is written 64Mb AMI UEFI Legal BIOS, so I thought it was really
 UEFI.

 I expect it's truly UEFI.  But from what I've read most UEFI based
 firmware(possibly all in theory, with the caveat that there's bugs and
 some might not actually work as intended due to bugs) on x86/amd64 (as
 opposed to arm) has a legacy-BIOS mode fallback.  Provided it's not in
 secure-boot mode, if the storage devices it is presented don't have a
 valid UEFI config, it'll fall back to legacy-BIOS mode and try to detect
 and boot that.

 Which may or may not be what your system is actually doing.  As I said,
 since I've not actually experimented with UEFI here, my practical
 knowledge on it is virtually nil, and I don't claim to have studied the
 theory well enough to deduce in that level of detail what your system is
 doing.  But I know that's how it's /supposed/ to be able to work. =:^)


Hello Duncan;

Yes I also suspect something like that. Unfortunately it is not really
clear on their website how it works.
You can find a lot of marketing stuff, but not what is really needed
to boot properly !

 (FWIW, what I /have/ done, deliberately, is read enough about UEFI to
 have a general feel for it, and to have been previously exposed to the
 ideas for some time, so that once I /do/ have it available and decide
 it's time, I'll be able to come up to speed relatively quickly as I've
 had the general ideas turning over in my head for quite some time
 already, so in effect I'll simply be reviewing the theory and doing the
 lab work, while concurrently making logical connections about how it all
 fits together that only happen once one actually does that lab work.
 I've discovered over the years that this is perhaps my most effective way
 to learn, read about the general principles while not really
 understanding it the first time thru, then come back to it some months or
 years later, and I pick it up real fast, because my subconscious has been
 working on the problem the whole time! Come to think of it, that's
 actually how I handled btrfs, too, trying it at one point and deciding it
 didn't fit my needs at the time, leaving it for awhile, then coming back
 to it later when my needs had changed, but I already had an idea what I
 was doing from the previous try, with the result being I really took to
 it fast, the second time!  =:^)


I'll try to keep that in mind !

 I understand. Normally the swap will only be used for hibernating. I
 don't expect to use it except perhaps in some extreme case.

 If hibernate is your main swap usage, you might consider the noauto fstab
 option as well, then specifically swapon the appropriate one in your
 hibernate script since you may well need logic in there to figure out
 which one to use in any case.  I was doing that for awhile.

 (I've run my own suspend/hibernate scripts based on the documentation in
 $KERNDIR/Documentation/power/*, for years.  The kernel's docs dir really
 is a great resource for a lot of sysadmin level stuff as well as the
 expected kernel developer stuff.  I think few are aware of just how much
 real useful admin-level information it actually contains. =:^)
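
The noauto approach Duncan describes could look something like this (a
sketch; the UUID is a placeholder):
   # /etc/fstab: swap entry that is not activated at boot
   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw,noauto  0  0
   # in the hibernate script: enable swap, hibernate, disable it again
   swapon -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
   echo disk > /sys/power/state
   swapoff -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx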

I am not so used to working without swap. I've worked for years with a
computer with low RAM and a swap and I didn't have any problem (even
when doing some RAM-intensive tasks). So I haven't tried to fix it ;-)
If there is sufficient RAM, I suppose the swap doesn't get used, so it
is not a problem to leave it ?
I hesitated a long time between ZFS and BTRFS, and one of the cases
for ZFS was that it can natively handle swap in its subvolumes (and
so in theory it can enter the RAID1 as well). However the ZFS folks
also seem to consider swap a relic of the past. I guess I will keep it
just in case. ;-)

The big problem I currently have is that based on your input, I
hesitate a lot on my partitioning scheme: should I use a dedicated
/boot partition or should I have one global BTRFS partition ?
It is not very clear in the doc (a lot of people used a dedicated
/boot because at that time, grub couldn't natively boot BTRFS it
seems, but it has changed).
Could you recommend a partitioning scheme for a simple RAID1 with 2
identical hard drives (just for home computing, not business).

Many thanks !


BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-11 Thread Saint Germain
Hello and thanks for your feedback !

Cc back to the mailing-list as it may be of interest here as well.

On 11 February 2014 16:11, Kyle Gates kylega...@hotmail.com wrote:
 The big problem I currently have is that based on your input, I
 hesitate a lot on my partitioning scheme: should I use a dedicated
 /boot partition or should I have one global BTRFS partition ?
 It is not very clear in the doc (a lot of people used a dedicated
 /boot because at that time, grub couldn't natively boot BTRFS it
 seems, but it has changed).
 Could you recommend a partitioning scheme for a simple RAID1 with 2
 identical hard drives (just for home computing, not business).

 I run a 1GiB RAID1 btrfs /boot in mixed mode with grub2 and gpt partitions.
 IIRC grub2 doesn't understand lzo compression nor subvolumes.
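
A mixed-mode RAID1 /boot like that could be created with something along
these lines (a sketch; device names are placeholders):
   mkfs.btrfs --mixed -m raid1 -d raid1 -L boot /dev/sda2 /dev/sdb2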


Well I did try to read about this and ended up being confused
because development is so fast that documentation quickly becomes
outdated.
It seems that grub can boot BTRFS /boot subvolumes:
https://bbs.archlinux.org/viewtopic.php?pid=1222358

However Chris Murphy had some problems a few months ago:
http://comments.gmane.org/gmane.comp.file-systems.btrfs/29140

So I still don't know if it is a good idea or not to have a BTRFS /boot ?
Of course the idea is that I would like to snapshot /boot and have it on RAID1.

To summarize, I think I have 3 options for partitioning (I am not
considering UEFI secure boot or swap):
1) grub, BTRFS partition (i.e. full disk in BTRFS), /boot inside BTRFS subvolume
2) grub, GPT partition, with (A) on sda1, and a BTRFS partition on
sda2, /boot inside BTRFS subvolume
3) grub, GPT partition, with (A) on sda1, /boot (ext4) on sda2, and a
BTRFS on sda3

(A) = BIOS Boot partition (1 MiB) or EFI System Partition (FAT32, 550MiB)

I don't really see the point of having UEFI/ESP if I don't use another
proprietary operating system, so I think I will go with (A) = BIOS
Boot partition except if there is something I have missed.

Can someone recommend which one would be the most stable and easier to manage ?

Thanks in advance,


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-11 Thread Chris Murphy

On Feb 10, 2014, at 8:15 PM, Saint Germain saint...@gmail.com wrote:
 
 Ok I need to really understand how my motherboard works (new Z87E-ITX).
 It is written 64Mb AMI UEFI Legal BIOS, so I thought it was really
 UEFI.

Manufacturers have done us a disservice by equating UEFI and BIOS. Some UEFI 
also have a compatibility support module (CSM) which presents a BIOS to the 
operating system. It's intended for legacy operating systems that won't ever 
directly support UEFI.

A way to tell in Linux if you're booting with or without the CSM is to issue the 
efibootmgr command. If it returns something that looks like an error message, 
the CSM is enabled and the OS thinks it's running on a BIOS computer. If it 
returns some numbered list then the CSM isn't enabled and the OS thinks it's 
running on a UEFI computer.
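
Another quick check is whether the kernel exposes the EFI interface at all,
e.g. (a sketch):
   [ -d /sys/firmware/efi ] && echo 'booted via UEFI' || echo 'booted via BIOS/CSM'
   efibootmgr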

 /dev/sdb has the same partition as /dev/sda.

grub-install device shouldn't work on UEFI because the only place 
grub-install installs is to the volume mounted at /boot/efi. And also 
grub-install /dev/sdb implies installing grub to a disk boot sector, which also 
isn't applicable to UEFI.


 I understand. Normally the swap will only be used for hibernating. I
 don't expect to use it except perhaps in some extreme case.

I consider hibernate fundamentally broken right now because whether it'll work 
depends on too many things. It works for some people and not others, and for 
those it does work it largely didn't work out of the box. It never worked for 
me and did induce Btrfs corruptions so I've just given up on hibernate 
entirely. There's a long old Fedora thread that discusses myriad issues about 
it: https://bugzilla.redhat.com/show_bug.cgi?id=781749 and shows even if it's 
working, it can stop working without warning after X number of hibernate-resume 
cycles.

Chris Murphy
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: UEFI/BIOS, was: BTRFS with RAID1 cannot boot when removing drive

2014-02-11 Thread Chris Murphy

On Feb 10, 2014, at 11:59 PM, Duncan 1i5t5.dun...@cox.net wrote:

 Saint Germain posted on Tue, 11 Feb 2014 04:15:27 +0100 as excerpted:
 
 Ok I need to really understand how my motherboard works (new Z87E-ITX).
 It is written 64Mb AMI UEFI Legal BIOS, so I thought it was really
 UEFI.
 
 I expect it's truly UEFI.  But from what I've read most UEFI based 
 firmware (possibly all in theory, with the caveat that there's bugs and 
 some might not actually work as intended due to bugs) on x86/amd64 (as 
 opposed to arm) has a legacy-BIOS mode fallback.  Provided it's not in 
 secure-boot mode, if the storage devices it is presented don't have a 
 valid UEFI config, it'll fall back to legacy-BIOS mode and try to detect 
 and boot that.

There are UEFI implementations that behave this way with respect to removable 
and optical devices, they'll try to boot UEFI mode first, and then fallback to 
BIOS. I've yet to find one that does this for a HDD although maybe it exists. 
What I've seen is the NVRAM has a boot option that expressly calls for booting 
a particular device with CSM-BIOS mode boot, or the user has to go into 
firmware setup and enable the CSM which does so for all boots. This option is 
sometimes hideously labeled as disable UEFI. There are some (probably rare) 
UEFI firmware implementations without a CSM, the only two I can think of off 
hand are Itanium computers, and Apple's intel rack servers (since 
discontinued). But CSM isn't a UEFI requirement so there may be other cases 
where manufacturers have decided to go with only EFI or BIOS.

Chris Murphy

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-11 Thread Saint Germain
On 11 February 2014 18:21, Chris Murphy li...@colorremedies.com wrote:

 On Feb 10, 2014, at 8:15 PM, Saint Germain saint...@gmail.com wrote:

 Ok I need to really understand how my motherboard works (new Z87E-ITX).
 It is written 64Mb AMI UEFI Legal BIOS, so I thought it was really
 UEFI.

 Manufacturers have done us a disservice by equating UEFI and BIOS. Some UEFI 
 also have a compatibility support module (CSM) which presents a BIOS to the 
 operating system. It's intended for legacy operating systems that won't ever 
 directly support UEFI.

 A way to tell in linux if you're booting with or without the CSM is to issue the 
 efibootmgr command. If it returns something that looks like an error message, 
 the CSM is enabled and the OS thinks it's running on a BIOS computer. If it 
 returns a numbered list then the CSM isn't enabled and the OS thinks it's 
 running on a UEFI computer.


Nice trick ! Thanks.

 /dev/sdb has the same partition as /dev/sda.

 grub-install device shouldn't work on UEFI because the only place 
 grub-install installs is to the volume mounted at /boot/efi. And also 
 grub-install /dev/sdb implies installing grub to a disk boot sector, which 
 also isn't applicable to UEFI.


I am still not up to date on UEFI partitioning and so on.
But I have read these pages:
http://tanguy.ortolo.eu/blog/article51/debian-efi
http://forums.debian.net/viewtopic.php?f=16&t=81120
And apparently grub-install device is the correct command (but
you have to copy a file in addition).
It is maybe because they use a special package, grub-efi-amd64, which
replaces grub-install.
It is quite difficult to find reliable info on the topic...


 I understand. Normally the swap will only be used for hibernating. I
 don't expect to use it except perhaps in some extreme case.

 I consider hibernate fundamentally broken right now because whether it'll 
 work depends on too many things. It works for some people and not others, and 
 for those it does work it largely didn't work out of the box. It never worked 
 for me and did induce Btrfs corruptions so I've just given up on hibernate 
 entirely. There's a long old Fedora thread that discusses myriad issues about 
 it: https://bugzilla.redhat.com/show_bug.cgi?id=781749 and shows even if it's 
 working, it can stop working without warning after X number of 
 hibernate-resume cycles.


I am among the fortunate to have a working hibernate out of the box
(Debian stable) and it works reliably (even if ultimately it WILL fail
after 20-30 iterations). So I use the feature to save on electricity
cost ;-)
But yes, maybe I will get rid of the swap...

Thanks !
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-11 Thread Chris Murphy

On Feb 11, 2014, at 10:02 AM, Saint Germain saint...@gmail.com wrote:

 Hello and thanks for your feedback !
 
 Cc back to the mailing-list as it may be of interest here as well.
 
 On 11 February 2014 16:11, Kyle Gates kylega...@hotmail.com wrote:
 The big problem I currently have is that based on your input, I
 hesitate a lot on my partitioning scheme: should I use a dedicated
 /boot partition or should I have one global BTRFS partition ?
 It is not very clear in the doc (a lot of people used a dedicated
 /boot because at that time grub couldn't natively boot BTRFS, it
 seems, but that has changed).
 Could you recommend a partitioning scheme for a simple RAID1 with 2
 identical hard drives (just for home computing, not business).
 
 I run a 1GiB RAID1 btrfs /boot in mixed mode with grub2 and gpt partitions.
 IIRC grub2 doesn't understand lzo compression nor subvolumes.

Any sufficiently recent version of GRUB2 understands both lzo and zlib. 

Grub doesn't directly understand the concept of subvolumes, but it treats them 
as directories, which is expected to work.

grub-mkconfig likewise is indirectly aware of subvolumes. If fstab indicates / 
is on Btrfs and there's a subvol= option, grub-mkconfig creates a 
rootflags=subvol= boot parameter in the resulting grub.cfg.
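As a sketch (the subvolume name is illustrative, and the UUID is only reused
from this thread as an example), an fstab entry such as:

    UUID=c64fca2a-5700-4cca-abac-3a61f2f7486c  /  btrfs  defaults,subvol=@root  0  1

should make grub-mkconfig emit a kernel line carrying the matching flag, roughly:

    linux ... root=UUID=c64fca2a-5700-4cca-abac-3a61f2f7486c ro rootflags=subvol=@root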


 Well I did try to read about this and ended up confused: development
 is so fast that documentation quickly becomes outdated.
 It seems that grub can boot BTRFS /boot subvolumes:
 https://bbs.archlinux.org/viewtopic.php?pid=1222358

I think it's more accurate to say it can locate boot files on a subvolume by 
treating the subvolume as a directory. So as long as the path to those boot 
files is correct, it should work.

And yes it's true that distros are frequently very far behind with grub, and 
this poses problems. Fedora has been on grub 2.00 since release two years ago. 
Many other distros are still using pre-release grub2 versions 1.97 through 1.99 
which isn't good. The distros should be pressured to move to grub 2.02, 
currently in beta, upon release. And I think it would be good for Btrfs testers 
to build grub 2.02 beta, and try to break it with various Btrfs configurations 
so that it can be a better-tested grub release.



 However Chris Murphy has some problems a few months ago:
 http://comments.gmane.org/gmane.comp.file-systems.btrfs/29140

The decision by Grub upstream is that Grub should always treat pathnames as 
starting from the top level of the file system, ID5, rather than starting from 
the user defined default subvolume. Otherwise, the user can instantly render 
their system unbootable just by changing the default subvolume.

subvol= option is intended to be read as starting from the top level of the 
file system regardless of the default subvolume. Probably more testing of this 
is needed, along with grub2, and other boot loaders.


 
 So I still don't know if it is a good idea or not to have a BTRFS /boot ?
 Of course the idea is that I would like to snapshot /boot and have it on 
 RAID1.

It's fine to do that, and it does work. You can also just have /boot as a 
directory within a root subvolume.


 
 To summarize, I think I have 3 options for partitioning (I am not
 considering UEFI secure boot or swap):
 1) grub, BTRFS partition (i.e. full disk in BTRFS), /boot inside BTRFS 
 subvolume

It doesn't seem like a good idea for a boot drive to be without partitions.


 2) grub, GPT partition, with (A) on sda1, and a BTRFS partition on
 sda2, /boot inside BTRFS subvolume
 3) grub, GPT partition, with (A) on sda1, /boot (ext4) on sda2, and a
 BTRFS on sda3
 
 (A) = BIOS Boot partition (1 MiB) or EFI System Partition (FAT32, 550MiB)
 
 I don't really see the point of having UEFI/ESP if I don't use any
 other proprietary operating system, so I think I will go with (A) =
 BIOS Boot partition, unless there is something I have missed.

You need to boot your system in UEFI and CSM-BIOS modes, and compare the dmesg 
for each. I'm finding it common that the CSM limits power management, and relegates 
drives to IDE speeds rather than full SATA link speeds. Sometimes it's 
unavoidable to use the CSM if it has better overall behavior for your use case. 
I've found it to be lacking and have abandoned it. It's basically intended for 
booting Windows XP, right?
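A quick way to make that dmesg comparison (a sketch; exact messages vary by
kernel and controller):

    dmesg | grep -i 'SATA link up'    # e.g. 6.0 Gbps under UEFI vs 3.0/1.5 Gbps or IDE mode under the CSM
    dmesg | grep -iE 'ahci|ata_piix'  # shows whether the controller came up as AHCI or legacy IDE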

Chris Murphy
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-11 Thread Chris Murphy

On Feb 11, 2014, at 10:36 AM, Saint Germain saint...@gmail.com wrote:
 
 grub-install device shouldn't work on UEFI because the only place 
 grub-install installs is to the volume mounted at /boot/efi. And also 
 grub-install /dev/sdb implies installing grub to a disk boot sector, which 
 also isn't applicable to UEFI.
 
 
 I am still not up to date on UEFI partitioning and so on.
 But I have read these pages:
 http://tanguy.ortolo.eu/blog/article51/debian-efi
 http://forums.debian.net/viewtopic.php?f=16&t=81120
 And apparently grub-install device is the correct command (but
 you have to copy a file in addition).
 It is maybe because they use a special package, grub-efi-amd64, which
 replaces grub-install.
 It is quite difficult to find reliable info on the topic…

grub-install device probably works insofar as device is being ignored. I 
bet you get the same results no matter what device you choose, but maybe I'm 
mistaken and they have some way to write to an unmounted ESP which seems like 
not such a great idea.


Chris Murphy

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-11 Thread Duncan
Saint Germain posted on Tue, 11 Feb 2014 11:04:57 +0100 as excerpted:

 The big problem I currently have is that based on your input, I hesitate
 a lot on my partitioning scheme: should I use a dedicated /boot
 partition or should I have one global BTRFS partition ?
 It is not very clear in the doc (a lot of people used a dedicated /boot
 because at that time grub couldn't natively boot BTRFS, it seems, but that
 has changed).
 Could you recommend a partitioning scheme for a simple RAID1 with 2
 identical hard drives (just for home computing, not business).

FWIW... I'm planning to, and still have your previous message covering that 
marked unread so I can reply to it later.  But real life has temporarily 
been monopolizing my time so the last day or two I've only done 
relatively short and quick replies.  That one will require a bit more 
time to answer to my satisfaction.

So I'm punting for the moment.  But FWIW I tend to be a reasonably heavy 
partitioner (tho nowhere near what I used to be), so a lot of folks will 
consider my setup somewhat extreme.  That's OK.  It's my computer, setup 
for my purposes, not their computer for theirs, and it works very well 
for me, so it's all good. =:^)

But hopefully I'll get back to that with a longer reply by the end of the 
week.  If I don't, you can probably consider that monopoly lasting longer 
than I thought, and it could be that I'll never get back to properly 
reply.  But it's an interesting enough topic to me that I'll /probably/ 
get back, just not right ATM.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS partitioning scheme (was BTRFS with RAID1 cannot boot when removing drive)

2014-02-11 Thread Duncan
Chris Murphy posted on Tue, 11 Feb 2014 11:15:56 -0700 as excerpted:

 The distros should be pressured to move to grub 2.02, currently in beta,
 upon release. And I think it would be good for Btrfs testers to build
 grub 2.02 beta, and try to break it with various Btrfs configurations so
 that it can be a better-tested grub release.

On the topic of grub minor update releases (and betas too), do you know 
where I can get a good grub-script-level sysadmin view of what has 
actually changed?

I'm running the betas, but a bit blind as the only update info I've been 
able to find is per-commit granularity, many commits per day, and I can't 
see the forest for all the trees!  I'd be far more comfortable with 
something rather more fuzzy than individual commits, but not so fuzzy and 
opaque as simply version number bumps.

This is of concern to me in part because I directly edit grub.cfg and 
includes, because the meta-level config helpers that are supposedly 
intended for mere mortals were far more fog than help to me, and I both 
couldn't get done what I wanted to do, and I needed to use the direct 
grub-script level commands anyway, and once I understood them, all the 
extra noise from the meta-level stuff was simply noisy cruft that 
obscured the real operation.  

Since I'm working at the direct grub-script level, I really need that 
level of detail on changes in updates as well.  Not the high-level-meta-
config stuff, not the per-commit stuff, something in between, analogous 
to the grub-script level but in a changelog.

And I need it for the betas as well as the full releases, since I'm 
running the betas.  Which means a changelog with the betas included so I 
could see what changed between betas as well as between general releases 
would be best.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-10 Thread Chris Murphy

On Feb 9, 2014, at 2:40 PM, Saint Germain saint...@gmail.com wrote:
 
 Then I added another drive for a RAID1 configuration (with btrfs
 balance) and I installed grub on the second hard drive with
 grub-install /dev/sdb.

That can't work on UEFI. UEFI firmware effectively requires a GPT partition map 
and something to serve as an EFI System partition on all bootable drives.

Second there's a difference between UEFI with and without secure boot.

With secure boot you need to copy the files your distro installer puts on the 
target drive's EFI System partition to each additional drive's ESP if you want 
multibooting to work in case of disk failure. The grub on each ESP likely looks 
only on its own ESP for a grub.cfg. So that then means having to sync 
grub.cfg's among each disk used for booting. A way around this is to create a 
single grub.cfg that merely forwards to the true grub.cfg. And you can copy 
this forward-only grub.cfg to each ESP. That way the ESP's never need updating 
or syncing again.
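A minimal forward-only grub.cfg for each ESP might look something like the
sketch below (the UUID should be that of the filesystem actually holding
/boot/grub; the one from this thread is used purely as an example):

    # stub grub.cfg: locate the real /boot/grub and hand over to its grub.cfg
    search --no-floppy --fs-uuid --set=root c64fca2a-5700-4cca-abac-3a61f2f7486c
    set prefix=($root)/boot/grub
    configfile $prefix/grub.cfg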

Without secure boot, you must umount /boot/efi and mount the ESP for each 
bootable disk in turn, and then merely run:

grub-install

That will cause a core.img to be created for that particular ESP, and it will 
point to the usual grub.cfg location at /boot/grub.
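As a sketch of that per-disk procedure (device names are illustrative and
assume the ESP is the first partition on each disk):

    umount /boot/efi
    mount /dev/sda1 /boot/efi && grub-install   # refresh core.img on the first disk's ESP
    umount /boot/efi
    mount /dev/sdb1 /boot/efi && grub-install   # and again for the second disk
    umount /boot/efi
    mount /dev/sda1 /boot/efi                   # finally remount the usual ESP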



 
 If I boot on sdb, it takes sda1 as the root filesystem
 If I switched the cable, it always take the first hard drive as the
 root filesystem (now sdb)
 If I disconnect /dev/sda, the system doesn't boot with a message
 saying that it hasn't found the UUID:
 
 Scanning for BTRFS filesystems...
 mount: mounting /dev/disk/by-uuid/c64fca2a-5700-4cca-abac-3a61f2f7486c on 
 /root failed: Invalid argument

Well if /dev/sda is missing, and you have an unpartitioned /dev/sdb I don't 
even know how you're getting this far, and it seems like the UEFI computer 
might actually be booting in CSM-BIOS mode which presents a conventional BIOS 
to the operating system. Distinguishing such things gets messy quickly.


 
 Can you tell me what I have done incorrectly ?
 Is it because of UEFI ? If yes I haven't understood how I can correct
 it in a simple way.
 
 As extra question, I don't see also how I can configure the system to
 get the correct swap in case of disk failure. Should I force both swap 
 partition
 to have the same UUID ?

If you're really expecting to create a system that can accept a disk failure 
and continue to work, I don't see how it can depend on swap partitions. It's 
fine to create them, but just realize if they're actually being used and the 
underlying physical device dies, the kernel isn't going to like it.

A possible work around is using an md raid1 partition as swap.
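A sketch of that workaround, assuming the two existing swap partitions are
reused as array members (device and array names are illustrative):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mkswap /dev/md0
    swapon /dev/md0
    # then point the swap entry in /etc/fstab at /dev/md0 (or its UUID)
    # instead of at the individual partitions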


Chris Murphy
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-10 Thread Saint Germain
Hello Duncan,

What an amazing extensive answer you gave me !
Thank you so much for it.

See my comments below.

On Mon, 10 Feb 2014 03:34:49 + (UTC), Duncan 1i5t5.dun...@cox.net
wrote :

  I am experimenting with BTRFS and RAID1 on my Debian Wheezy (with
  backported kernel 3.12-0.bpo.1-amd64) using a motherboard with
  UEFI.
 
 My systems don't do UEFI, but I do run GPT partitions and use grub2
 for booting, with grub2-core installed to a BIOS/reserved type
 partition (instead of as an EFI service as it would be with UEFI).
 And I have root filesystem btrfs two-device raid1 mode working fine
 here, tested bootable with only one device of the two available.
 
 So while I can't help you directly with UEFI, I know the rest of it
 can/ does work.
 
 One more thing:  I do have a (small) separate btrfs /boot, actually
 two of them as I setup a separate /boot on each of the two devices in
 order to have a backup /boot, since grub can only point to
 one /boot by default, and while pointing to another in grub's rescue
 mode is possible, I didn't want to have to deal with that if the
 first /boot was corrupted, as it's easier to simply point the BIOS at
 a different drive entirely and load its (independently installed and
 configured) grub and /boot.
 

Can you explain why you chose to have a dedicated /boot partition ?
I also read on this thread that it may be better to have a
dedicated /boot partition:
https://bbs.archlinux.org/viewtopic.php?pid=1342893#p1342893


  However I haven't managed to make the system boot when removing
  the first hard drive.
  
  I have installed Debian with the following partition on the first
  hard drive (no BTRFS subsystem):
  /dev/sda1: for / (BTRFS)
  /dev/sda2: for /home (BTRFS)
  /dev/sda3: for swap
  
  Then I added another drive for a RAID1 configuration (with btrfs
  balance) and I installed grub on the second hard drive with
  grub-install /dev/sdb.
 
 Just for clarification as you don't mention it specifically, altho
 your btrfs filesystem show information suggests you did it this way,
 are your partition layouts identical on both drives?
 
 That's what I've done here, and I definitely find that easiest to
 manage and even just to think about, tho it's definitely not a
 requirement.  But using different partition layouts does
 significantly increase management complexity, so it's useful to avoid
 if possible. =:^)

Yes, the partition layout is exactly the same on both drives (copied
with sfdisk). I also try to keep things simple ;-)
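For reference, such a copy is typically done along these lines (a sketch;
it overwrites sdb's partition table, so the devices should be double-checked
first):

    sfdisk -d /dev/sda > sda-parts.dump   # dump sda's partition table
    sfdisk /dev/sdb < sda-parts.dump      # replay it onto sdb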

  If I boot on sdb, it takes sda1 as the root filesystem
 
  If I switched the cable, it always take the first hard drive as
  the root filesystem (now sdb)
 
 That's normal /appearance/, but that /appearance/ doesn't fully
 reflect reality.
 
 The problem is that mount output (and /proc/self/mounts), fstab, etc, 
 were designed with single-device filesystems in mind, and
 multi-device btrfs has to be made to fix the existing rules as best
 it can.
 
 So what's actually happening is that for a btrfs composed of
 multiple devices, since there's only one device slot for the kernel
 to list devices, it only displays the first one it happens to come
 across, even tho the filesystem will normally (unless degraded)
 require that all component devices be available and logically
 assembled into the filesystem before it can be mounted.
 
 When you boot on sdb, naturally, the sdb component of the
 multi-device filesystem is the one the kernel finds, so it's the one
 listed, even tho the filesystem is actually composed of more devices,
 not just that one.

I am not following you: it seems to be the opposite of what you
describe. If I boot on sdb, I expect sdb1 and sdb2 to be the first
components that the kernel finds. However I can see that sda1 and sda2
are used (using the 'mount' command).

 When you switch the cables, the first one is, at
 least on your system, always the first device component of the
 filesystem detected, so it's always the one occupying the single
 device slot available for display, even tho the filesystem has
 actually assembled all devices into the complete filesystem before
 mounting.
 

Normally the 2 hard drives should be exactly the same (or I didn't
understand something) except for the UUID_SUB.
That's why I don't understand: if I switch the cables, I should get
exactly the same results with 'mount'.
But that is not the case, the 'mount' command always points to the same
partitions:
- without cable switch: sda1 and sda2
- with cable switch: sdb1 and sdb2
Everything happens as if the system is using the UUID_SUB to pick its
'favorite' partition.

  If I disconnect /dev/sda, the system doesn't boot with a message
  saying that it hasn't found the UUID:
  
  Scanning for BTRFS filesystems...
  mount:
  mounting /dev/disk/by-uuid/c64fca2a-5700-4cca-abac-3a61f2f7486c
  on /root failed: Invalid argument
  
  Can you tell me what I have done incorrectly ?
  Is it because of UEFI ? If yes I haven't understood how I can
  correct 

Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-10 Thread Saint Germain
Hello !

On Mon, 10 Feb 2014 19:18:22 -0700, Chris Murphy
li...@colorremedies.com wrote :

 
 On Feb 9, 2014, at 2:40 PM, Saint Germain saint...@gmail.com wrote:
  
  Then I added another drive for a RAID1 configuration (with btrfs
  balance) and I installed grub on the second hard drive with
  grub-install /dev/sdb.
 
 That can't work on UEFI. UEFI firmware effectively requires a GPT
 partition map and something to serve as an EFI System partition on
 all bootable drives.
 
 Second there's a difference between UEFI with and without secure boot.
 
 With secure boot you need to copy the files your distro installer
 puts on the target drive's EFI System partition to each additional
 drive's ESP if you want multibooting to work in case of disk failure.
 The grub on each ESP likely looks only on its own ESP for a grub.cfg.
 So that then means having to sync grub.cfg's among each disk used for
 booting. A way around this is to create a single grub.cfg that merely
 forwards to the true grub.cfg. And you can copy this forward-only
 grub.cfg to each ESP. That way the ESP's never need updating or
 syncing again.
 
 Without secure boot, you must umount /boot/efi and mount the ESP for
 each bootable disk in turn, and then merely run:
 
 grub-install
 
 That will cause a core.img to be created for that particular ESP, and
 it will point to the usual grub.cfg location at /boot/grub.
 

Ok I need to really understand how my motherboard works (new Z87E-ITX).
It is written 64Mb AMI UEFI Legal BIOS, so I thought it was really
UEFI.

 
  
  If I boot on sdb, it takes sda1 as the root filesystem
  If I switched the cable, it always take the first hard drive as the
  root filesystem (now sdb)
  If I disconnect /dev/sda, the system doesn't boot with a message
  saying that it hasn't found the UUID:
  
  Scanning for BTRFS filesystems...
  mount:
  mounting /dev/disk/by-uuid/c64fca2a-5700-4cca-abac-3a61f2f7486c
  on /root failed: Invalid argument
 
 Well if /dev/sda is missing, and you have an unpartitioned /dev/sdb I
 don't even know how you're getting this far, and it seems like the
 UEFI computer might actually be booting in CSM-BIOS mode which
 presents a conventional BIOS to the operating system. Distinguishing
 such things gets messy quickly.
 

/dev/sdb has the same partition as /dev/sda.
Duncan gave me the hint with degraded mode and I managed to boot
(however I had some problem with mounting sda2).

  
  Can you tell me what I have done incorrectly ?
  Is it because of UEFI ? If yes I haven't understood how I can
  correct it in a simple way.
  
  As extra question, I don't see also how I can configure the system
  to get the correct swap in case of disk failure. Should I force
  both swap partition to have the same UUID ?
 
 If you're really expecting to create a system that can accept a disk
 failure and continue to work, I don't see how it can depend on swap
 partitions. It's fine to create them, but just realize if they're
 actually being used and the underlying physical device dies, the
 kernel isn't going to like it.
 
 A possible work around is using an md raid1 partition as swap.
 

I understand. Normally the swap will only be used for hibernating. I
don't expect to use it except perhaps in some extreme case.

Thanks for your help !
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-10 Thread Duncan
Saint Germain posted on Tue, 11 Feb 2014 04:15:27 +0100 as excerpted:

 Ok I need to really understand how my motherboard works (new Z87E-ITX).
 It is written 64Mb AMI UEFI Legal BIOS, so I thought it was really
 UEFI.

I expect it's truly UEFI.  But from what I've read most UEFI based 
firmware (possibly all in theory, with the caveat that there's bugs and 
some might not actually work as intended due to bugs) on x86/amd64 (as 
opposed to arm) has a legacy-BIOS mode fallback.  Provided it's not in 
secure-boot mode, if the storage devices it is presented don't have a 
valid UEFI config, it'll fall back to legacy-BIOS mode and try to detect 
and boot that.

Which may or may not be what your system is actually doing.  As I said, 
since I've not actually experimented with UEFI here, my practical 
knowledge on it is virtually nil, and I don't claim to have studied the 
theory well enough to deduce in that level of detail what your system is 
doing.  But I know that's how it's /supposed/ to be able to work. =:^)

(FWIW, what I /have/ done, deliberately, is read enough about UEFI to 
have a general feel for it, and to have been previously exposed to the 
ideas for some time, so that once I /do/ have it available and decide 
it's time, I'll be able to come up to speed relatively quickly as I've 
had the general ideas turning over in my head for quite some time 
already, so in effect I'll simply be reviewing the theory and doing the 
lab work, while concurrently making logical connections about how it all 
fits together that only happen once one actually does that lab work.  
I've discovered over the years that this is perhaps my most effective way 
to learn, read about the general principles while not really 
understanding it the first time thru, then come back to it some months or 
years later, and I pick it up real fast, because my subconscious has been 
working on the problem the whole time! Come to think of it, that's 
actually how I handled btrfs, too, trying it at one point and deciding it 
didn't fit my needs at the time, leaving it for awhile, then coming back 
to it later when my needs had changed, but I already had an idea what I 
was doing from the previous try, with the result being I really took to 
it fast, the second time!  =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-10 Thread Duncan
Saint Germain posted on Tue, 11 Feb 2014 04:15:27 +0100 as excerpted:

 I understand. Normally the swap will only be used for hibernating. I
 don't expect to use it except perhaps in some extreme case.

If hibernate is your main swap usage, you might consider the noauto fstab 
option as well, then specifically swapon the appropriate one in your 
hibernate script since you may well need logic in there to figure out 
which one to use in any case.  I was doing that for awhile.
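A sketch of that arrangement (the UUID is the swap partition quoted earlier
in this thread, used only as an example):

    # /etc/fstab: don't activate the hibernate swap automatically
    UUID=469715b2-2fa3-4462-b6f5-62c04a60a4a2  none  swap  sw,noauto  0  0

    # in the hibernate script: enable whichever swap is present, then hibernate
    swapon /dev/disk/by-uuid/469715b2-2fa3-4462-b6f5-62c04a60a4a2
    echo disk > /sys/power/state
    # (the resume side also needs a matching resume= kernel parameter)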

(I've run my own suspend/hibernate scripts based on the documentation in 
$KERNDIR/Documentation/power/*, for years.  The kernel's docs dir really 
is a great resource for a lot of sysadmin level stuff as well as the 
expected kernel developer stuff.  I think few are aware of just how much 
real useful admin-level information it actually contains. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


BTRFS with RAID1 cannot boot when removing drive

2014-02-09 Thread Saint Germain
Hello,

I am experimenting with BTRFS and RAID1 on my Debian Wheezy (with
backported kernel 3.12-0.bpo.1-amd64) using a motherboard with UEFI.

However I haven't managed to make the system boot when removing the
first hard drive.

I have installed Debian with the following partition on the first hard
drive (no BTRFS subsystem):
/dev/sda1: for / (BTRFS)
/dev/sda2: for /home (BTRFS)
/dev/sda3: for swap

Then I added another drive for a RAID1 configuration (with btrfs
balance) and I installed grub on the second hard drive with
grub-install /dev/sdb.

If I boot on sdb, it takes sda1 as the root filesystem
If I switched the cable, it always take the first hard drive as the
root filesystem (now sdb)
If I disconnect /dev/sda, the system doesn't boot with a message
saying that it hasn't found the UUID:

Scanning for BTRFS filesystems...
mount: mounting /dev/disk/by-uuid/c64fca2a-5700-4cca-abac-3a61f2f7486c on /root 
failed: Invalid argument

Can you tell me what I have done incorrectly ?
Is it because of UEFI ? If yes I haven't understood how I can correct
it in a simple way.

As extra question, I don't see also how I can configure the system to
get the correct swap in case of disk failure. Should I force both swap partition
to have the same UUID ?

Many thanks in advance !


Here are some outputs for info:

btrfs filesystem show
Label: none  uuid: 743d6b3b-71a7-4869-a0af-83549555284b
Total devices 2 FS bytes used 27.96MB
devid 1 size 897.98GB used 3.03GB path /dev/sda2
devid 2 size 897.98GB used 3.03GB path /dev/sdb2

Label: none  uuid: c64fca2a-5700-4cca-abac-3a61f2f7486c
Total devices 2 FS bytes used 3.85GB
devid 1 size 27.94GB used 7.03GB path /dev/sda1
devid 2 size 27.94GB used 7.03GB path /dev/sdb1

blkid 
/dev/sda1: UUID=c64fca2a-5700-4cca-abac-3a61f2f7486c 
UUID_SUB=77ffad34-681c-4c43-9143-9b73da7d1ae3 TYPE=btrfs 
/dev/sda3: UUID=469715b2-2fa3-4462-b6f5-62c04a60a4a2 TYPE=swap 
/dev/sda2: UUID=743d6b3b-71a7-4869-a0af-83549555284b 
UUID_SUB=744510f5-5bd5-4df4-b8c4-0fc1a853199a TYPE=btrfs 
/dev/sdb1: UUID=c64fca2a-5700-4cca-abac-3a61f2f7486c 
UUID_SUB=2615fd98-f2ad-4e7b-84bc-0ee7f9770ca0 TYPE=btrfs 
/dev/sdb2: UUID=743d6b3b-71a7-4869-a0af-83549555284b 
UUID_SUB=8783a7b1-57ef-4bcc-ae7f-be20761e9a19 TYPE=btrfs 
/dev/sdb3: UUID=56fbbe2f-7048-488f-b263-ab2eb000d1e1 TYPE=swap

cat /etc/fstab
# file system mount point   type  options   dump  pass
UUID=c64fca2a-5700-4cca-abac-3a61f2f7486c /       btrfs   defaults        0   1
UUID=743d6b3b-71a7-4869-a0af-83549555284b /home   btrfs   defaults        0   2
UUID=469715b2-2fa3-4462-b6f5-62c04a60a4a2 none    swap    sw              0   0

cat /boot/grub/grub.cfg 
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  load_env
fi
set default=0
if [ ${prev_saved_entry} ]; then
  set saved_entry=${prev_saved_entry}
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z ${boot_once} ]; then
saved_entry=${chosen}
save_env saved_entry
  fi
}

function load_video {
  insmod vbe
  insmod vga
  insmod video_bochs
  insmod video_cirrus
}

insmod part_msdos
insmod btrfs
set root='(hd1,msdos1)'
search --no-floppy --fs-uuid --set=root c64fca2a-5700-4cca-abac-3a61f2f7486c
if loadfont /usr/share/grub/unicode.pf2 ; then
  set gfxmode=640x480
  load_video
  insmod gfxterm
  insmod part_msdos
  insmod btrfs
  set root='(hd1,msdos1)'
  search --no-floppy --fs-uuid --set=root c64fca2a-5700-4cca-abac-3a61f2f7486c
  set locale_dir=($root)/boot/grub/locale
  set lang=fr_FR
  insmod gettext
fi
terminal_output gfxterm
set timeout=5
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
insmod part_msdos
insmod btrfs
set root='(hd1,msdos1)'
search --no-floppy --fs-uuid --set=root c64fca2a-5700-4cca-abac-3a61f2f7486c
insmod png
if background_image /usr/share/images/desktop-base/joy-grub.png; then
  set color_normal=white/black
  set color_highlight=black/white
else
  set menu_color_normal=cyan/blue
  set menu_color_highlight=white/blue
fi
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Debian GNU/Linux, with Linux 3.12-0.bpo.1-amd64' --class debian 
--class gnu-linux --class gnu --class os {
load_video
insmod gzio
insmod part_msdos
insmod btrfs
set root='(hd1,msdos1)'
search --no-floppy --fs-uuid --set=root 
c64fca2a-5700-4cca-abac-3a61f2f7486c
echo'Chargement de Linux 3.12-0.bpo.1-amd64 ...'
linux   /boot/vmlinuz-3.12-0.bpo.1-amd64 
root=UUID=c64fca2a-5700-4cca-abac-3a61f2f7486c ro  quiet
echo'Chargement du disque mémoire initial ...'
initrd  

Re: BTRFS with RAID1 cannot boot when removing drive

2014-02-09 Thread Duncan
Saint Germain posted on Sun, 09 Feb 2014 22:40:55 +0100 as excerpted:

 I am experimenting with BTRFS and RAID1 on my Debian Wheezy (with
 backported kernel 3.12-0.bpo.1-amd64) using a motherboard with UEFI.

My systems don't do UEFI, but I do run GPT partitions and use grub2 for 
booting, with grub2-core installed to a BIOS/reserved type partition 
(instead of as an EFI service as it would be with UEFI).  And I have root 
filesystem btrfs two-device raid1 mode working fine here, tested bootable 
with only one device of the two available.

So while I can't help you directly with UEFI, I know the rest of it can/
does work.

One more thing:  I do have a (small) separate btrfs /boot, actually two 
of them as I setup a separate /boot on each of the two devices in order 
to have a backup /boot, since grub can only point to one /boot by 
default, and while pointing to another in grub's rescue mode is possible, 
I didn't want to have to deal with that if the first /boot was corrupted, 
as it's easier to simply point the BIOS at a different drive entirely and 
load its (independently installed and configured) grub and /boot.

But grub2's btrfs module reads raid1 mode just fine as I can access files 
on the btrfs raid1 mode rootfs directly from grub without issue, so 
that's not a problem.

But I strongly suspect I know what is... and it's a relatively easy fix.  
See below.  =:^)

 However I haven't managed to make the system boot when removing the
 first hard drive.
 
 I have installed Debian with the following partition on the first hard
 drive (no BTRFS subsystem):
 /dev/sda1: for / (BTRFS)
 /dev/sda2: for /home (BTRFS)
 /dev/sda3: for swap
 
 Then I added another drive for a RAID1 configuration (with btrfs
 balance) and I installed grub on the second hard drive with
 grub-install /dev/sdb.

Just for clarification as you don't mention it specifically, altho your 
btrfs filesystem show information suggests you did it this way, are your 
partition layouts identical on both drives?

That's what I've done here, and I definitely find that easiest to manage 
and even just to think about, tho it's definitely not a requirement.  But 
using different partition layouts does significantly increase management 
complexity, so it's useful to avoid if possible. =:^)

 If I boot on sdb, it takes sda1 as the root filesystem

 If I switched the cable, it always take the first hard drive as
 the root filesystem (now sdb)

That's normal /appearance/, but that /appearance/ doesn't fully reflect 
reality.

The problem is that mount output (and /proc/self/mounts), fstab, etc, 
were designed with single-device filesystems in mind, and multi-device 
btrfs has to be made to fit the existing rules as best it can.

So what's actually happening is that for a btrfs composed of multiple 
devices, since there's only one device slot for the kernel to list 
devices, it only displays the first one it happens to come across, even 
tho the filesystem will normally (unless degraded) require that all 
component devices be available and logically assembled into the 
filesystem before it can be mounted.

When you boot on sdb, naturally, the sdb component of the multi-device 
filesystem is the one the kernel finds, so it's the one listed, even tho the 
filesystem is actually composed of more devices, not just that one.  When 
you switch the cables, the first one is, at least on your system, always 
the first device component of the filesystem detected, so it's always the 
one occupying the single device slot available for display, even tho the 
filesystem has actually assembled all devices into the complete 
filesystem before mounting.

 If I disconnect /dev/sda, the system doesn't boot with a message saying
 that it hasn't found the UUID:
 
 Scanning for BTRFS filesystems...
 mount: mounting /dev/disk/by-uuid/c64fca2a-5700-4cca-abac-3a61f2f7486c
 on /root failed: Invalid argument
 
 Can you tell me what I have done incorrectly ?
 Is it because of UEFI ? If yes I haven't understood how I can correct it
 in a simple way.

As you haven't mentioned it and the grub config below doesn't mention it 
either, I'm almost certain that you're simply not aware of the degraded 
mount option, and when/how it should be used.

And if you're not aware of that, chances are you're not aware of the 
btrfs wiki, and the multitude of other very helpful information it has 
available.  I'd suggest you spend some time reading it, as it'll very 
likely save you quite some btrfs administration questions and headaches 
down the road, as you continue to work with btrfs.

Bookmark it and refer to it often! =:^)

https://btrfs.wiki.kernel.org

(Click on the guides and usage information in contents under section 5, 
documentation.)

Here's the mount options page.  Note that the kernel btrfs documentation 
also includes mount options:

https://btrfs.wiki.kernel.org/index.php/Mount_options

$KERNELDIR/Documentation/filesystems/btrfs.txt
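For reference, a degraded mount looks roughly like this (a sketch; the device
name is illustrative and the UUID is the rootfs from this thread):

    # manual mount with one device of the raid1 missing
    mount -o degraded /dev/sdb1 /mnt

    # or, for the root filesystem, add the flag to the kernel line in grub.cfg:
    # root=UUID=c64fca2a-5700-4cca-abac-3a61f2f7486c ro rootflags=degraded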

You should be able to mount a two-device