Re: How to Boot with LVM

2015-09-13 Thread ray
I have found that I can copy an existing entry in EFI, EFI/debian for example, 
paste it in parallel under a new name, EFI/test, run update-grub, and when I 
reboot, I have a new choice that boots into the same debian instance.

This suggests that I can now delete the original debian entry, run update-grub, 
and have only one entry that boots into the same instance.  Then I should be 
able to install Debian via the installation disk.

Here comes an experiment.

I would like to thank you all for your persistence.  I will inform everyone 
of my findings.



Re: How to Boot with LVM

2015-09-11 Thread ray
On Thursday, September 10, 2015 at 11:10:05 AM UTC-5, David Wright wrote:
> Quoting ray :
> > On Wednesday, September 9, 2015 at 2:10:04 PM UTC-5, Pascal Hambourg wrote:
> > > ray a écrit :
> [...]
> > > > A baffling point:  In rEFInd the path is 
> > > > /boot/efi/EFI/debian/grubx64.efi
> > > 
> > > How is it baffling ? The EFI system partition is mounted on /boot/efi
> > > and the path relative to the partition filesystem root is
> > > /EFI/debian/grubx64.efi. The EFI firmware does not care about where you
> > > mount the EFI system partition.
> > Baffling:  Viewing with rEFInd, I see /boot/efi/EFI/debian/grubx64.efi
> 
> [...]
> 
> > > What's mounted on /boot/efi ?
> > I am not sure what it means 'what's mounted on ...'.
> 
> Yes, it does seem to me that you don't understand the concept of
> mounting.
> 
> You have a partition containing a filesystem which contains the sole
> pathname EFI/debian/grubx64.efi as shown by
> https://lists.debian.org/debian-user/2015/09/msg00273.html
> listed under Fs8. EFI is the topmost directory, and that name is
> probably obligatory. Within it can be any number of different
> directories for different OSes that you might be booting, in this case
> just debian. Within EFI/debian/ is grub's own file.
> 
> You can mount a filesystem (in a partition) onto any existing
> directory, and you've chosen to mount on /boot/efi. Normally,
> mountpoints are empty directories like this, set up for that purpose,
> but in general they don't have to be. (However, if you mount a
> filesystem onto a non-empty directory, any previous contents will be
> hidden until you unmount the filesystem again.)
> 
> When you mount your filesystem onto your mountpoint, the files in
> the filesystem will now appear under the mountpoint-directory.
> Thus you get:
> 
> mountpoint
>     ↓
> /boot/efi/EFI/debian/grubx64.efi
>           ↑                    ↑
>            filesystem paths
> 
> There's a couple of diagrams of this at
> http://www.linuxchix.org/content/courses/filesystem/Lesson1.html
> where /boot/efi ↔ /floppy and EFI/debian/grubx64.efi ↔ fun/Lesson1.html
> 
> Cheers,
> David.

I had read Lesson1 before and did not catch on.  It took rereading these 
messages and exercising the system to get a little grip on this.  Your 
perspective has been significantly different from Lesson1 and the others I have 
read, and thus has been very helpful.



Re: How to Boot with LVM

2015-09-11 Thread ray
On Thursday, September 10, 2015 at 10:00:06 AM UTC-5, Pascal Hambourg wrote:
> ray a écrit :
> > I have only been able to boot the HDD instance.  When I navigate to
> > the SSD instance, nothing is there.
> 
> Sorry, I should have mentioned that I never used rEFInd (fortunately
> never needed it) and don't know how it works and what it looks like.
> Could you describe what it displays step by step ?
> 
> >> /dev/sdf is one of the SSD used for RAID 0 and LVM, right ?
> >
> > /dev/sdf is a HDD, no md or LVM.
> 
> I was confused because you wrote in a previous post :
> 
> > sda, sdb 32GB + 32GB, RAID0 - md0, LVM, GParted shows 1MB reserved, 1 GB 
> > (EFI)
> > sdc, sdd 64GB + 64GB, RAID0- md1, md127, LVM, GParted shows 1MB reserved, 1 
> > GB (EFI)
> > sde, sdf 120GB + 120GB, RAID0- md0, md126 LVM, GParted shows 1MB reserved, 
> > 1 GB (EFI)
> > sdg, sdh are 2 and 4 GB HD, sdg currently hosts debian8.+q++q
> 
> So it looks like some device names changed.
> 
> >>> root@mc:/boot/efi/EFI# grub-install /dev/sdf
> >>> Installing for x86_64-efi platform.
> >>> Installation finished. No error reported.
> >>
> >> The device name is not used by grub-install with an EFI target.
> >> You could have tried to use the option --boot-loader-id I mentioned in
> >> a previous post.
> >
> > Which device name is not used by grub-install?
> 
> Whatever you type as the device name in the command line, /dev/sdf  here.
> 
> > I did not find a way to use --boot-loader-id.  I googled this exact
> > phrase and did not find anything but this posting.  How do I use it?
> 
> It is described in the grub-install manpage. Just type "man grub-install"
> in the command line to read it.
This is one place I fell down: my instance of grub-install did not have that 
option.

> 
> >>> root@mc:/boot/efi/EFI# file /boot/efi/EFI/debian/grubx64.efi
> >>> /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) 
> >>> x86-64 (stripped to external PDB), for MS Windows
> >>> root@mc:/boot/efi/EFI# efibootmgr --verbose | grep debian
> >>> Boot* debian
> >>> HD(1,GPT,87471e98-b814-4aa9-b2bc-ea4669c75565,0x800,0x10)/File(\EFI\debian\grubx64.efi)
> >> Looks as expected. You can check with blkid which partition has
> >> PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565. If you wonder about the
> >> forward / in the boot entry pathname, that's because the UEFI uses
> >> MS-style path.
> > blkid shows PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565 to be /dev/sdf1.
> 
> This is consistent with /dev/sdf1 being mounted on /boot/efi.
> 
> >>> A baffling point:  In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi
> >> How is it baffling ? The EFI system partition is mounted on /boot/efi
> >> and the path relative to the partition filesystem root is
> >> /EFI/debian/grubx64.efi. The EFI firmware does not care about where you
> >> mount the EFI system partition.
> >
> > Baffling:  Viewing with rEFInd, I see /boot/efi/EFI/debian/grubx64.efi
> 
> What do you mean by "viewing with rEFInd" ? AFAIK, rEFInd is just a boot
> loader, and pathnames such as /boot/efi/EFI/debian/grubx64.efi are used
> only in a running system after the kernel takes over.
> 
> >>> After booting up into the HDD instance, I get:
> >> Booting how ? On its own or from rEFInd ?
> > This is after booting on its own.
> 
> Whether you boot the HDD Debian instance from rEFInd, the GRUB EFI
> installed on HDD or any other boot loader should not make any difference
> in the mounted filesystems...
> 
> >> What's mounted on /boot/efi ?
> > I am not sure what it means 'what's mounted on ...'.
> 
> If "mount" or "df" show a line with /dev/sdf1 and /boot/efi, it means
> that /dev/sdf1 is mounted on /boot/efi.
It took me many times rereading this for it to sink in.

> 
> > #mount | grep boot returns empty
> > #mount | grep efi returns efivarfs on /sys/firmware/efi/efivars (...)
> 
> Looks like nothing is mounted on /boot/efi, explaining why it looks
> empty. But we have yet to explain why nothing is mounted.
> Can you check the contents of /etc/fstab ?
> 
> > root@md:/home/rayj# df -h /boot/
> > Filesystem  Size  Used Avail Use% Mounted on
> > /dev/sdf2   1.4T  4.2G  1.3T   1% /
> 
> Irrelevant. We are interested in /boot/efi, not /boot.
> 
> > OK, a little more reading tells me /dev/sdf2 is mounted on /boot
> 
> No, it is the root filesystem, mounted on /. There is no separate /boot.

It looks like I lost my previous response to this conversation in my excitement.
After rereading your messages and David's, I found I needed to mount /dev/sdf1 
on /boot/efi since I didn't find it there.  
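
For anyone following along, the step itself is just this (a minimal sketch; 
/dev/sdf1 and the existing /boot/efi mountpoint are the values from this thread):

# mount /dev/sdf1 /boot/efi
# mount | grep /boot/efi      # should now show /dev/sdf1 mounted there
# ls /boot/efi/EFI/debian     # grubx64.efi should now be visible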

The first fault was:
# grub-install /dev/sdf --target=x86_64-efi  --bootloader-id=test --recheck
Unrecognized option `--target=x86_64-efi'

I found from grub-install --help, there was no --bootloader-id=  or --recheck.
After some checking, I
#apt-get 

Re: How to Boot with LVM

2015-09-11 Thread ray
On Thursday, September 10, 2015 at 10:00:06 AM UTC-5, Pascal Hambourg wrote:
> ray a écrit :
> > I have only been able to boot the HDD instance.  When I navigate to
> > the SSD instance, nothing is there.
> 
> Sorry, I should have mentioned that I never used rEFInd (fortunately
> never needed it) and don't know how it works and what it looks like.
> Could you describe what it displays step by step ?

An ISO is loaded onto a stick using Rufus on Win7 on another box.
This stick is inserted into the target box, which is then booted.
rEFInd asks a couple of interface questions such as keyboard mappings.
The one and only screen comes up.
The top row is the list of OSes it has found.  It found two (one will boot, the 
other not).
The second row provides a choice of rEFInd functions, or a shell.  
When I select the first OS listed, it boots.  It works well; in fact it boots 
twice as fast.  Resetting the box (without the stick) results in a failed boot.
When selecting the second choice, the box just hangs.
When rEFInd is run and the first OS is selected, it boots.  On reset, the 
box boots right up into the selected OS (still twice as fast, 15 sec. instead 
of 35).

> 
> >> /dev/sdf is one of the SSD used for RAID 0 and LVM, right ?
> >
> > /dev/sdf is a HDD, no md or LVM.
> 
> I was confused because you wrote in a previous post :
> 
> > sda, sdb 32GB + 32GB, RAID0 - md0, LVM, GParted shows 1MB reserved, 1 GB 
> > (EFI)
> > sdc, sdd 64GB + 64GB, RAID0- md1, md127, LVM, GParted shows 1MB reserved, 1 
> > GB (EFI)
> > sde, sdf 120GB + 120GB, RAID0- md0, md126 LVM, GParted shows 1MB reserved, 
> > 1 GB (EFI)
> > sdg, sdh are 2 and 4 GB HD, sdg currently hosts debian8.+q++q
> 
> So it looks like some device names changed.
Yes.  I removed a pair of SSDs for testing and when I put them back in, the 
order had changed.

> 
> >>> root@mc:/boot/efi/EFI# grub-install /dev/sdf
> >>> Installing for x86_64-efi platform.
> >>> Installation finished. No error reported.
> >>
> >> The device name is not used by grub-install with an EFI target.
> >> You could have tried to use the option --boot-loader-id I mentioned in
> >> a previous post.
> >
> > Which device name is not used by grub-install?
> 
> Whatever you type as the device name in the command line, /dev/sdf  here.
> 
> > I did not find a way to use --boot-loader-id.  I googled this exact
> > phrase and did not find anything but this posting.  How do I use it?
> 
> It is described in the grub-install manpage. Just type "man grub-install"
> in the command line to read it.
I found it as "--bootloader-id=".  

> 
> >>> root@mc:/boot/efi/EFI# file /boot/efi/EFI/debian/grubx64.efi
> >>> /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) 
> >>> x86-64 (stripped to external PDB), for MS Windows
> >>> root@mc:/boot/efi/EFI# efibootmgr --verbose | grep debian
> >>> Boot* debian
> >>> HD(1,GPT,87471e98-b814-4aa9-b2bc-ea4669c75565,0x800,0x10)/File(\EFI\debian\grubx64.efi)
> >> Looks as expected. You can check with blkid which partition has
> >> PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565. If you wonder about the
> >> forward / in the boot entry pathname, that's because the UEFI uses
> >> MS-style path.
> > blkid shows PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565 to be /dev/sdf1.
> 
> This is consistent with /dev/sdf1 being mounted on /boot/efi.
> 
> >>> A baffling point:  In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi
> >> How is it baffling ? The EFI system partition is mounted on /boot/efi
> >> and the path relative to the partition filesystem root is
> >> /EFI/debian/grubx64.efi. The EFI firmware does not care about where you
> >> mount the EFI system partition.
> >
> > Baffling:  Viewing with rEFInd, I see /boot/efi/EFI/debian/grubx64.efi
> 
> What do you mean by "viewing with rEFInd" ? AFAIK, rEFInd is just a boot
> loader, and pathnames such as /boot/efi/EFI/debian/grubx64.efi are used
> only in a running system after the kernel takes over.
While in the rEFInd shell, ls -l in /boot shows efi/EFI/debian/grubx64.
When in the Debian instance, ls -l in /boot shows nothing inside efi, whereas 
rEFInd shows EFI/debian/grubx64 in there.

This is where my understanding of mounting gets lost.  /dev/sdf1 is mounted at 
/boot/efi, so sdf1 must have EFI/debian/grubx64.  But Debian doesn't see it.  

I think this is related to my problem of using:
grub-install --bootloader-id=testcase /dev/sdf
so I also tried:
grub-install --bootloader-id=testcase /boot
and 
grub-install --bootloader-id=testcase /boot/efi

But I get back a message:
Installing for x86_64-efi platform.
grub-install: error: cannot find EFI directory.

OK, rereading your message and David's many times, I mounted /dev/sdf1 to 
/boot/efi and then ran grub-install and it worked.

I then ran update-grub.  I then got a new EFI folder with additional 
entries - debian and test.
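
For reference, a rough sketch of the sequence that finally worked here (the id 
"test" and the device /dev/sdf are just the values from this thread; with an 
EFI target, grub-install ignores the device argument anyway):

# mount /dev/sdf1 /boot/efi
# grub-install --bootloader-id=test /dev/sdf
# update-grub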



> 
> >>> After booting up into the HDD instance, I get:
> >> Booting how ? On its 

Re: How to Boot with LVM

2015-09-10 Thread Pascal Hambourg
ray a écrit :
> I have only been able to boot the HDD instance.  When I navigate to
> the SSD instance, nothing is there.

Sorry, I should have mentioned that I never used rEFInd (fortunately
never needed it) and don't know how it works and what it looks like.
Could you describe what it displays step by step ?

>> /dev/sdf is one of the SSD used for RAID 0 and LVM, right ?
>
> /dev/sdf is a HDD, no md or LVM.

I was confused because you wrote in a previous post :

> sda, sdb 32GB + 32GB, RAID0 - md0, LVM, GParted shows 1MB reserved, 1 GB (EFI)
> sdc, sdd 64GB + 64GB, RAID0- md1, md127, LVM, GParted shows 1MB reserved, 1 
> GB (EFI)
> sde, sdf 120GB + 120GB, RAID0- md0, md126 LVM, GParted shows 1MB reserved, 1 
> GB (EFI)
> sdg, sdh are 2 and 4 GB HD, sdg currently hosts debian8.+q++q

So it looks like some device names changed.

>>> root@mc:/boot/efi/EFI# grub-install /dev/sdf
>>> Installing for x86_64-efi platform.
>>> Installation finished. No error reported.
>>
>> The device name is not used by grub-install with an EFI target.
>> You could have tried to use the option --boot-loader-id I mentioned in
>> a previous post.
>
> Which device name is not used by grub-install?

Whatever you type as the device name in the command line, /dev/sdf  here.

> I did not find a way to use --boot-loader-id.  I googled this exact
> phrase and did not find anything but this posting.  How do I use it?

It is described in the grub-install manpage. Just type "man grub-install"
in the command line to read it.

>>> root@mc:/boot/efi/EFI# file /boot/efi/EFI/debian/grubx64.efi
>>> /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 
>>> (stripped to external PDB), for MS Windows
>>> root@mc:/boot/efi/EFI# efibootmgr --verbose | grep debian
>>> Boot* debian
>>> HD(1,GPT,87471e98-b814-4aa9-b2bc-ea4669c75565,0x800,0x10)/File(\EFI\debian\grubx64.efi)
>> Looks as expected. You can check with blkid which partition has
>> PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565. If you wonder about the
>> forward / in the boot entry pathname, that's because the UEFI uses
>> MS-style path.
> blkid shows PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565 to be /dev/sdf1.

This is consistent with /dev/sdf1 being mounted on /boot/efi.

>>> A baffling point:  In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi
>> How is it baffling ? The EFI system partition is mounted on /boot/efi
>> and the path relative to the partition filesystem root is
>> /EFI/debian/grubx64.efi. The EFI firmware does not care about where you
>> mount the EFI system partition.
>
> Baffling:  Viewing with rEFInd, I see /boot/efi/EFI/debian/grubx64.efi

What do you mean by "viewing with rEFInd" ? AFAIK, rEFInd is just a boot
loader, and pathnames such as /boot/efi/EFI/debian/grubx64.efi are used
only in a running system after the kernel takes over.

>>> After booting up into the HDD instance, I get:
>> Booting how ? On its own or from rEFInd ?
> This is after booting on its own.

Whether you boot the HDD Debian instance from rEFInd, the GRUB EFI
installed on HDD or any other boot loader should not make any difference
in the mounted filesystems...

>> What's mounted on /boot/efi ?
> I am not sure what it means 'what's mounted on ...'.

If "mount" or "df" show a line with /dev/sdf1 and /boot/efi, it means
that /dev/sdf1 is mounted on /boot/efi.

> #mount | grep boot returns empty
> #mount | grep efi returns efivarfs on /sys/firmware/efi/efivars (...)

Looks like nothing is mounted on /boot/efi, explaining why it looks
empty. But we have yet to explain why nothing is mounted.
Can you check the contents of /etc/fstab ?

> root@md:/home/rayj# df -h /boot/
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sdf2   1.4T  4.2G  1.3T   1% /

Irrelevant. We are interested in /boot/efi, not /boot.

> OK, a little more reading tells me /dev/sdf2 is mounted on /boot

No, it is the root filesystem, mounted on /. There is no separate /boot.
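
For comparison, on a system installed in UEFI mode the EFI system partition 
normally has a line in /etc/fstab roughly like this (the UUID is a placeholder; 
take the real one from blkid, and the mount options may differ a bit):

UUID=XXXX-XXXX  /boot/efi  vfat  umask=0077  0  1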



Re: How to Boot with LVM

2015-09-10 Thread David Wright
Quoting ray (r...@aarden.us):
> On Wednesday, September 9, 2015 at 2:10:04 PM UTC-5, Pascal Hambourg wrote:
> > ray a écrit :
[...]
> > > A baffling point:  In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi
> > 
> > How is it baffling ? The EFI system partition is mounted on /boot/efi
> > and the path relative to the partition filesystem root is
> > /EFI/debian/grubx64.efi. The EFI firmware does not care about where you
> > mount the EFI system partition.
> Baffling:  Viewing with rEFInd, I see /boot/efi/EFI/debian/grubx64.efi

[...]

> > What's mounted on /boot/efi ?
> I am not sure what it means 'what's mounted on ...'.

Yes, it does seem to me that you don't understand the concept of
mounting.

You have a partition containing a filesystem which contains the sole
pathname EFI/debian/grubx64.efi as shown by
https://lists.debian.org/debian-user/2015/09/msg00273.html
listed under Fs8. EFI is the topmost directory, and that name is
probably obligatory. Within it can be any number of different
directories for different OSes that you might be booting, in this case
just debian. Within EFI/debian/ is grub's own file.

You can mount a filesystem (in a partition) onto any existing
directory, and you've chosen to mount on /boot/efi. Normally,
mountpoints are empty directories like this, set up for that purpose,
but in general they don't have to be. (However, if you mount a
filesystem onto a non-empty directory, any previous contents will be
hidden until you unmount the filesystem again.)

When you mount your filesystem onto your mountpoint, the files in
the filesystem will now appear under the mountpoint-directory.
Thus you get:

mountpoint
    ↓
/boot/efi/EFI/debian/grubx64.efi
          ↑                    ↑
           filesystem paths
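
To make it concrete, here is a small sketch of the same idea on the command 
line (the device name is only an example):

# ls /boot/efi                # empty: nothing mounted here yet
# mount /dev/sdf1 /boot/efi
# ls /boot/efi                # now shows EFI/
# ls /boot/efi/EFI/debian     # grubx64.efi
# umount /boot/efi
# ls /boot/efi                # empty again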

There's a couple of diagrams of this at
http://www.linuxchix.org/content/courses/filesystem/Lesson1.html
where /boot/efi ↔ /floppy and EFI/debian/grubx64.efi ↔ fun/Lesson1.html

Cheers,
David.



Re: How to Boot with LVM

2015-09-09 Thread ray
On Wednesday, September 9, 2015 at 2:10:04 PM UTC-5, Pascal Hambourg wrote:
> ray a écrit :
> > On Tuesday, September 8, 2015 at 8:10:08 AM UTC-5, Pascal Hambourg wrote:
> > 
> >> After booting the HDD system with rEFInd, running 'grub-install' should
> >> reinstall the bootloader properly. See also useful options in my
> >> previous message.
> > 
> > Yes, it is now booting.  This is with the rEFInd stick (the HDD instance 
> > booted through the rEFInd stick):
> > root@mc:/boot/efi/EFI# mount /dev/sdf1 /boot/efi
> > mount: /dev/sdf1 is already mounted or /boot/efi busy
> >/dev/sdf1 is already mounted on /boot/efi
> 
> Which instance did you boot with rEFInd ?
I have only been able to boot the HDD instance.  When I navigate to the SSD 
instance, nothing is there.

> /dev/sdf is one of the SSD used for RAID 0 and LVM, right ?
/dev/sdf is a HDD, no md or LVM.
> 
> > root@mc:/boot/efi/EFI# apt-get install --reinstall grub-efi
> 
> Note that grub-efi is a dummy package which depends on either
> grub-efi-amd64 or grub-efi-ia32 depending on the installed architecture.
> Reinstalling it does nothing.
> 
> > root@mc:/boot/efi/EFI# grub-install /dev/sdf
> > Installing for x86_64-efi platform.
> > Installation finished. No error reported.
> 
> The device name is not used by grub-install with an EFI target.
> You could have tried to use the option --boot-loader-id I mentioned in
> a previous post.
Which device name is not used by grub-install?
I did not find a way to use --boot-loader-id.  I googled this exact phrase and 
did not find anything but this posting.  How do I use it?

> 
> > root@mc:/boot/efi/EFI# file /boot/efi/EFI/debian/grubx64.efi
> > /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 
> > (stripped to external PDB), for MS Windows
> > root@mc:/boot/efi/EFI# efibootmgr --verbose | grep debian
> > Boot* debian
> > HD(1,GPT,87471e98-b814-4aa9-b2bc-ea4669c75565,0x800,0x10)/File(\EFI\debian\grubx64.efi)
> 
> Looks as expected. You can check with blkid which partition has
> PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565. If you wonder about the
> forward / in the boot entry pathname, that's because the UEFI uses
> MS-style path.
blkid shows PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565 to be /dev/sdf1.

> 
> > A baffling point:  In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi
> 
> How is it baffling ? The EFI system partition is mounted on /boot/efi
> and the path relative to the partition filesystem root is
> /EFI/debian/grubx64.efi. The EFI firmware does not care about where you
> mount the EFI system partition.
Baffling:  Viewing with rEFInd, I see /boot/efi/EFI/debian/grubx64.efi

> 
> > After booting up into the HDD instance, I get:
> 
> Booting how ? On its own or from rEFInd ?
This is after booting on its own.
> 
> > root@mc:/boot# ls -a
> > .   config-4.0.0-2-amd64  grub  System.map-4.0.0-2-amd64
> > ..  efi  initrd.img-4.0.0-2-amd64  vmlinuz-4.0.0-2-amd64
> > root@mc:/boot# cd grub
> > root@mc:/boot/grub# ls -a
> > .  ..  fonts  grub.cfg  grubenv  locale  unicode.pf2  x86_64-efi
> 
> So far so good.
> 
> > root@mc:~# cd /boot
> > root@mc:/boot# cd efi
> > root@mc:/boot/efi# ls -a
> > .  ..
> > 
> > There is nothing past /boot/efi
> 
> What's mounted on /boot/efi ?
I am not sure what it means 'what's mounted on ...'.

#mount | grep boot returns empty
#mount | grep efi returns efivarfs on /sys/firmware/efi/efivars type efivarfs 
(rw,nosuid,nodev,noexec,relatime)

root@md:/home/rayj# df -h /boot/
Filesystem  Size  Used Avail Use% Mounted on
/dev/sdf2   1.4T  4.2G  1.3T   1% /

# mount | column -t  returns empty

# df /boot
Filesystem  1K-blocksUsed  Available Use% Mounted on
/dev/sdf2  1408531700 4380392 1332578984   1% /

OK, a little more reading tells me /dev/sdf2 is mounted on /boot



Re: How to Boot with LVM

2015-09-09 Thread Pascal Hambourg
ray a écrit :
> On Tuesday, September 8, 2015 at 8:10:08 AM UTC-5, Pascal Hambourg wrote:
> 
>> After booting the HDD system with rEFInd, running 'grub-install' should
>> reinstall the bootloader properly. See also useful options in my
>> previous message.
> 
> Yes, it is now booting.  This is with the rEFInd stick:
> root@mc:/boot/efi/EFI# mount /dev/sdf1 /boot/efi
> mount: /dev/sdf1 is already mounted or /boot/efi busy
>/dev/sdf1 is already mounted on /boot/efi

Which instance did you boot with rEFInd ?
/dev/sdf is one of the SSD used for RAID 0 and LVM, right ?

> root@mc:/boot/efi/EFI# apt-get install --reinstall grub-efi

Note that grub-efi is a dummy package which depends on either
grub-efi-amd64 or grub-efi-ia32 depending on the installed architecture.
Reinstalling it does nothing.

> root@mc:/boot/efi/EFI# grub-install /dev/sdf
> Installing for x86_64-efi platform.
> Installation finished. No error reported.

The device name is not used by grub-install with an EFI target.
You could have tried to use the option --boot-loader-id I mentioned in
a previous post.

> root@mc:/boot/efi/EFI# file /boot/efi/EFI/debian/grubx64.efi
> /boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 
> (stripped to external PDB), for MS Windows
> root@mc:/boot/efi/EFI# efibootmgr --verbose | grep debian
> Boot* debian
> HD(1,GPT,87471e98-b814-4aa9-b2bc-ea4669c75565,0x800,0x10)/File(\EFI\debian\grubx64.efi)

Looks as expected. You can check with blkid which partition has
PARTUUID=87471e98-b814-4aa9-b2bc-ea4669c75565. If you wonder about the
forward / in the boot entry pathname, that's because the UEFI uses
MS-style path.
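
For example (a sketch; adjust the device name to whatever you want to inspect):

# blkid /dev/sdf1             # shows UUID, TYPE and PARTUUID of that partition
# blkid | grep -i 87471e98    # or search for the PARTUUID from the boot entry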

> A baffling point:  In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi

How is it baffling ? The EFI system partition is mounted on /boot/efi
and the path relative to the partition filesystem root is
/EFI/debian/grubx64.efi. The EFI firmware does not care about where you
mount the EFI system partition.

> After booting up into the HDD instance, I get:

Booting how ? On its own or from rEFInd ?

> root@RoxTor:/boot# ls -a
> .   config-4.0.0-2-amd64  grub  System.map-4.0.0-2-amd64
> ..  efi  initrd.img-4.0.0-2-amd64  vmlinuz-4.0.0-2-amd64
> root@RoxTor:/boot# cd grub
> root@RoxTor:/boot/grub# ls -a
> .  ..  fonts  grub.cfg  grubenv  locale  unicode.pf2  x86_64-efi

So far so good.

> root@RoxTor:~# cd /boot
> root@RoxTor:/boot# cd efi
> root@RoxTor:/boot/efi# ls -a
> .  ..
> 
> There is nothing past /boot/efi

What's mounted on /boot/efi ?



Re: How to Boot with LVM

2015-09-08 Thread Rick Thomas
Hi Ray,

I’ll try to answer your questions…

On Sep 7, 2015, at 4:36 PM, ray  wrote:

> Rick,
> 
> Thank you for responding and providing all the info.  
> 
> On Monday, September 7, 2015 at 6:20:07 AM UTC-5, Rick Thomas wrote:
>> On Sep 5, 2015, at 7:24 PM, ray wrote:
>> 
>>> I would like to configure LVMs for everything including boot.
>> 
>> Is it "just for fun" or do you have a real-world reason for wanting 
>> everything, including boot, to be on LVM?
> Lack of knowledge.  I was expecting it to be cleaner.  But it no longer looks 
> that way.
>> 
>> I'll describe my own typical setup (special purpose systems may have 
>> different setups to meet special purpose needs).  For purposes of 
>> illustration, I'll describe a system with two identical disks.  The 
>> principles should be clear as they apply to systems with larger or more 
>> varied configurations.  If you have only a single disk, you can skip the 
>> RAID parts in this and go straight to LVM.
> 
> I have 3 pairs of SSDs, each pair in a RAID0.

I would use RAID1 on each pair (3 SSDs worth of “usable” space), or RAID5 or 6 
on a larger aggregate.  E.g. RAID6 on all 6 drives (gives 4 SSDs worth of 
“usable” space), or RAID5 on 6 drives (gives 5 SSDs worth of “usable” space).  
Each of those configurations can survive a loss of one SSD (or two, in the case 
of RAID6) without data loss.

Your choice of RAID0 in pairs gives the full 6 SSDs worth of “usable” space, 
but has zero redundancy.  If that works for you, that’s great.

I’ve got enough experience (40 years) as a sysadmin to have seen users tearing 
hair over lost data (I always had backups — often tape in those days — so the 
only thing really lost was uptime, but you get the point…)

In case it’s not clear, “usable” means space left over after subtracting out 
all the redundant data in the array. 

>  
>> 
>> I configure a small (<1GB) "/boot" partition as a primary partition (e.g. 
>> /dev/sda1) on one of the disks, with the same space on the other disk 
>> unused. [1]  I make another primary partition (e.g. /dev/sd[ab]2), on each 
>> of the disks, sized to be one half of the size I want for my swap.  The rest 
>> of the space on each disk goes into a single, large, logical partition (e.g. 
>> /dev/sd[ab]5).
>> 
> This is similar to my setup.  I have the swaps on a separate RAID0.

Should work fine.  Of course you should read Pascal’s post for a different 
point of view.

> 
>> The two swap partitions I set up as a RAID0 (e.g. /dev/md0).  This will be 
>> my system swap. [2]
>> 
>> The two large logical partitions, I configure into a RAID1 (e.g. /dev/md1). 
>> [3]
>> 
> I made my large partition a primary.  Could this be problematic?

It would only be a problem if you need another primary partition for something. 
 You only get 3 primary partitions on a disk, so I like to leave one free "just 
in case".

> 
>> I configure the RAID1 as the physical volume for a single volume group which 
>> I partition using LVM into a root that's big enough to be about 50% full 
>> when fully installed,  and /home that's as big as I think I'll need for my 
>> users.  The remainder of the VG I leave unconfigured, so I can grow into it 
>> as needs become apparent over time.
>> 
>> If I have enough RAM to make it useful, I'll put /tmp on a tmpfs.  I size it 
>> at about 50% of my swap space.  With a smaller RAM, I make a separate 
>> logical volume for /tmp.
>> 
> I would like to know more about the purpose of these criteria - please.

Putting /tmp on tmpfs is for speed.  If your /tmp usage fits in RAM (after 
allowing for apps, data, and disk buffering) you get RAM speed access to your 
temporary data.  If it overflows, then the excess goes into swap, so you’re no 
worse off than if you had /tmp on hard disk in the first place.  This can make 
large compilations (as an example) run *much* faster.  On the other hand, if 
you are tight on RAM, putting /tmp on disk doesn’t hurt, and eliminates a 
source of contention for RAM which is assumed to be a scarce resource.
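
As an illustration, the whole thing is one fstab line; something like this 
(the size value is only an example; set it to roughly half your swap):

tmpfs  /tmp  tmpfs  defaults,size=18G  0  0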

The 50% figure is just a rule of thumb I picked up over the years.  There’s 
nothing magic about it.  And, as always, YMMV.  It’s highly application 
dependent.

> 
>> [1] I know there are ways to make grub work with RAID1, but it's too 
>> complicated for me to get it right.  Instead, I just make regular backups of 
>> the /boot partition.  If the disk with /boot on it develops a bad spot in an 
>> inconvenient place, I simply boot from a CD in rescue mode and restore the 
>> contents of /boot from a backup into the unused space that I reserved on the 
>> other disk.
> 
> Yes, that was my plan also.  But it was also my plan to backup the rest of 
> the system to HDD.  As such, I planned for no redundancy in drive 
> configuration and only stripe for speed.

It might make sense to create a logical volume with some of the unused space in 
the volume group called (for lack of a better name) /backup.  It’s “reliable” 
in the sense 

Re: How to Boot with LVM

2015-09-08 Thread ray
On Tuesday, September 8, 2015 at 5:40:04 AM UTC-5, Pascal Hambourg wrote:
> ray a écrit :
> > On Monday, September 7, 2015 at 3:40:05 AM UTC-5, Pascal Hambourg wrote:
> >>
> >> Did the Debian installer boot in EFI or BIOS/legacy mode ?
> >
> > The motherboard BIOS reports the Debian installation media as a UEFI USB.  
> > The installer boot screen says UEFI and it is the same media used on the 
> > harddisk.  
> 
> Then could you clarify the following parts in your first post :
> 
> > I use a USB stick to load the second Debian.
> 
> What do you mean by "to load" ? To /install/ the second Debian system on
> the SSDs or to /boot/ it once installed ?

Yes, that was a poor choice of words (to load).  I installed the second Debian 
via the USB stick.
> 
> > I have an LVM partition for the new installation.  When I select it, the
> > installer (in manual mode) says it is not bootable and go back to setup
> > to correct.  When I go back to setup, I don't see any way to do anything
> > but select a VG, dm, sdx, or HDD.
> 
> In which part of the installer is this happening ?
During installation, this is the 'partition' step in manual mode.
> I first thought you were talking about the boot device selection during
> the boot loader installation, but in UEFI mode the installer does not
> prompt about a boot device because there is no boot device. The
> bootloader is installed in an EFI system partition which should be
> formatted as FAT16 or FAT32 and mounted on /boot/efi (implicit if you
> select "use as EFI system partition).
> 
> 
> > I had a Debian instance on the HD that worked fine and when I
> > installed a new instance on the SSD, neither would boot.
> > 
> > So I rebuilt the HD instance, ran it to configure the SSDs, and again,
> > when installing to the SSDs, nothing will boot.
> 
> This is where things go awkward with GRUB UEFI. UEFI boot is intended to
> make multiboot easier. This is quite true with different operating
> systems (e.g. Windows + Linux, or Debian + Ubuntu). Each system is
> supposed to install its own boot loader in a separate directory in the
> EFI boot partition and register it as an EFI boot entry with a fancy
> name so that it can be selected at boot time, either implicitly using
> priorities or manually through a boot menu displayed by the firmware.
That is an eye-opener. 
> 
> However it does not work this way if you install two copies of a Debian
> system with GRUB : the latter installation will overwrite and replace
> the former boot loader with its own, because the Debian installer always
> uses the same name "debian" for the directory in the EFI system
> partition and the EFI boot entry.
> 
> If you intend to keep the hard disk containing the Debian system
> installed, you don't need to install another boot loader for the Debian
> system on the SSDs. The GRUB boot loader on the hard disk can boot
> another instance of Debian after detecting it with os-prober and
> rebuilding a new menu with update-grub to include it.
How does os-prober get initiated?  I ask because it seems that, to use this 
method, I will boot up into the HDD instance and then boot into the SSD.  Can I 
rename the HDD boot loader before installing the second instance?  And if so, how?



Re: How to Boot with LVM

2015-09-08 Thread Pascal Hambourg
ray a écrit :
> On Monday, September 7, 2015 at 3:40:05 AM UTC-5, Pascal Hambourg wrote:
>>
>> Did the Debian installer boot in EFI or BIOS/legacy mode ?
>
> The motherboard BIOS reports the Debian installation media as a UEFI USB.  
> The installer boot screen says UEFI and it is the same media used on the 
> harddisk.  

Then could you clarify the following parts in your first post :

> I use a USB stick to load the second Debian.

What do you mean by "to load" ? To /install/ the second Debian system on
the SSDs or to /boot/ it once installed ?

> I have an LVM partition for the new installation.  When I select it, the
> installer (in manual mode) says it is not bootable and go back to setup
> to correct.  When I go back to setup, I don't see any way to do anything
> but select a VG, dm, sdx, or HDD.

In which part of the installer is this happening ?
I first thought you were talking about the boot device selection during
the boot loader installation, but in UEFI mode the installer does not
prompt about a boot device because there is no boot device. The
bootloader is installed in an EFI system partition which should be
formatted as FAT16 or FAT32 and mounted on /boot/efi (implicit if you
select "use as EFI system partition).


> I had a Debian instance on the HD that worked fine and when I
> installed a new instance on the SSD, neither would boot.
> 
> So I rebuilt the HD instance, ran it to configure the SSDs, and again,
> when installing to the SSDs, nothing will boot.

This is where things go awkward with GRUB UEFI. UEFI boot is intended to
make multiboot easier. This is quite true with different operating
systems (e.g. Windows + Linux, or Debian + Ubuntu). Each system is
supposed to install its own boot loader in a separate directory in the
EFI boot partition and register it as an EFI boot entry with a fancy
name so that it can be selected at boot time, either implicitly using
priorities or manually through a boot menu displayed by the firmware.

However it does not work this way if you install two copies of a Debian
system with GRUB : the latter installation will overwrite and replace
the former boot loader with its own, because the Debian installer always
uses the same name "debian" for the directory in the EFI system
partition and the EFI boot entry.

If you intend to keep the hard disk containing the Debian system
installed, you don't need to install another boot loader for the Debian
system on the SSDs. The GRUB boot loader on the hard disk can boot
another instance of Debian after detecting it with os-prober and
rebuilding a new menu with update-grub to include it.



Re: How to Boot with LVM

2015-09-08 Thread ray
On Monday, September 7, 2015 at 8:40:05 PM UTC-5, ray wrote:
> Update:
I would like to clarify that I was able to boot the HDD instance from the 
rEFInd stick.  But, after rebooting and removing the stick, nothing will boot - 
still.



Re: How to Boot with LVM

2015-09-08 Thread Pascal Hambourg
ray a écrit :
> On Tuesday, September 8, 2015 at 5:40:04 AM UTC-5, Pascal Hambourg wrote:
>> ray a écrit :
>> 
>>> I have an LVM partition for the new installation.  When I select it, the
>>> installer (in manual mode) says it is not bootable and go back to setup
>>> to correct.  When I go back to setup, I don't see any way to do anything
>>> but select a VG, dm, sdx, or HDD.
>>
>> In which part of the installer is this happening ?
>
> During installation, this is the 'partition' step in manual mode.

What was your action which triggered that "not bootable" error ?

> How does os-prober get initiated?

Actually, os-prober is called by default when running update-grub. But
by running it manually, you can check that it detects the other Debian
installation. You may need to activate the RAID arrays and LVs in order
to do so. Then you can run update-grub to actually add the new Debian
installation to the boot menu. You can instead create the boot entry by
hand if you have the knowledge.
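
A rough sketch of that sequence (the activation commands assume the usual 
mdadm/LVM tools; nothing here is specific to this machine):

# mdadm --assemble --scan     # bring up any RAID arrays that are not yet active
# vgchange -ay                # activate all LVM logical volumes
# os-prober                   # should now list the other Debian installation
# update-grub                 # regenerates grub.cfg, including the detected entry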

> Can I rename the HDD boot loader before installing the second instance?

In recent versions of grub-install manpage such as the one included in
Jessie, I read about a new option --boot-loader-id which may allow you to
give the bootloader directory and boot entry a specific name. Do not use
a name containing "debian" (case insensitive), because any boot entry
containing this word will be erased by the next Debian installation of
GRUB with default parameters. But I have not tested it yet. I used to
rename the directories or files in the EFI system partition by hand and
use efibootmgr to manage EFI boot entries. But efibootmgr is not an easy
tool if you're not comfortable with EFI booting.
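
Untested here, as said, but the idea would be something like this (the label 
"second" is arbitrary; avoid anything containing "debian"):

# grub-install --bootloader-id=second
# efibootmgr -v               # list the EFI boot entries and their paths
# efibootmgr -c -d /dev/sdX -p 1 -L "second" -l '\EFI\second\grubx64.efi'
                              # only needed if creating the entry by hand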

Also consider the new --force-extra-removable option, which IIUC
installs a copy of the bootloader as EFI/boot/bootx64.efi in the EFI
system partition, the default path used when no boot entry works.
However if you use the same EFI system partition for a new installation
and choose "yes" when prompted to install the bootloader in the
removable media path, this will overwrite the default bootloader.
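
For example (sketch, untested):

# grub-install --force-extra-removable
# ls /boot/efi/EFI/boot       # should now contain bootx64.efi as well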



Re: How to Boot with LVM

2015-09-08 Thread Pascal Hambourg
ray a écrit :
> I would like to clarify that I was able to boot the HDD instance from
> the rEFInd stick.  But, after rebooting and removing the stick, nothing
> will boot - still.

After booting the HDD system with rEFInd, running 'grub-install' should
reinstall the bootloader properly. See also useful options in my
previous message.



Re: How to Boot with LVM

2015-09-08 Thread ray
On Tuesday, September 8, 2015 at 8:10:08 AM UTC-5, Pascal Hambourg wrote:

> After booting the HDD system with rEFInd, running 'grub-install' should
> reinstall the bootloader properly. See also useful options in my
> previous message.

Yes, it is now booting.  This is with the rEFInd stick:
root@mc:/boot/efi/EFI# mount /dev/sdf1 /boot/efi
mount: /dev/sdf1 is already mounted or /boot/efi busy
   /dev/sdf1 is already mounted on /boot/efi
root@mc:/boot/efi/EFI# apt-get install --reinstall grub-efi
Reading package lists... Done
Building dependency tree  
Reading state information... Done
The following NEW packages will be installed:
  grub-efi
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 2,512 B of archives.
After this operation, 13.3 kB of additional disk space will be used.
Get:1 http://ftp.us.debian.org/debian/ stretch/main grub-efi amd64 
2.02~beta2-26 [2,512 B]
Fetched 2,512 B in 0s (10.8 kB/s)  
Selecting previously unselected package grub-efi.
(Reading database ... 150704 files and directories currently installed.)
Preparing to unpack .../grub-efi_2.02~beta2-26_amd64.deb ...
Unpacking grub-efi (2.02~beta2-26) ...
Setting up grub-efi (2.02~beta2-26) ...
root@mc:/boot/efi/EFI#
root@mc:/boot/efi/EFI# grub-install /dev/sdf
Installing for x86_64-efi platform.
Installation finished. No error reported.
root@mc:/boot/efi/EFI# file /boot/efi/EFI/debian/grubx64.efi
/boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 
(stripped to external PDB), for MS Windows
root@mc:/boot/efi/EFI# efibootmgr --verbose | grep debian
Boot* debian
HD(1,GPT,87471e98-b814-4aa9-b2bc-ea4669c75565,0x800,0x10)/File(\EFI\debian\grubx64.efi)


A baffling point:  In rEFInd the path is /boot/efi/EFI/debian/grubx64.efi

After booting up into the HDD instance, I get:

root@RoxTor:/boot# ls -a
.   config-4.0.0-2-amd64  grub  System.map-4.0.0-2-amd64
..  efi  initrd.img-4.0.0-2-amd64  vmlinuz-4.0.0-2-amd64
root@RoxTor:/boot# cd grub
root@RoxTor:/boot/grub# ls -a
.  ..  fonts  grub.cfg  grubenv  locale  unicode.pf2  x86_64-efi
root@RoxTor:~# cd /boot
root@RoxTor:/boot# cd efi
root@RoxTor:/boot/efi# ls -a
.  ..

There is nothing past /boot/efi

Any idea what is going on?



Re: How to Boot with LVM

2015-09-08 Thread ray
On Tuesday, September 8, 2015 at 3:50:04 AM UTC-5, Rick Thomas wrote:
> Hi Ray,
> 
> I'll try to answer your questions...
> 
> On Sep 7, 2015, at 4:36 PM, ray wrote:
> 
> > Rick,
> > 
> > Thank you for responding and providing all the info.  
> > 
> > On Monday, September 7, 2015 at 6:20:07 AM UTC-5, Rick Thomas wrote:
> >> On Sep 5, 2015, at 7:24 PM, ray wrote:
> >> 

> > I have 3 pairs of SSDs, each pair in a RAID0.
> 
> I would use RAID1 on each pair (3 SSDs worth of "usable" space), or RAID5 or 
> 6 on a larger aggregate.  E.g. RAID6 on all 6 drives (gives 4 SSDs worth of 
> "usable" space), or RAID5 on 6 drives (gives 5 SSDs worth of "usable" space). 
>  Each of those configurations can survive a loss of one SSD (or two, in the 
> case of RAID6) without data loss.
> 
> Your choice of RAID0 in pairs gives the full 6 SSDs worth of "usable" space, 
> but has zero redundancy.  If that works for you, that's great.
> 
> I've got enough experience (40 years) as a sysadmin to have seen users 
> tearing hair over lost data (I always had backups -- often tape in those days 
> -- so the only thing really lost was uptime, but you get the point...)
( When I bought a tape reel for the IBM 360, I held onto my boxes of cards for 
4 months.  I only had to use them once, when I overwrote my tape with an empty 
library.  I kept the tape for 5 years until I was sure there would be nothing 
left that could read it.  My new storage was 8" floppies.)
> 
> In case it's not clear, "usable" means space left over after subtracting out 
> all the redundant data in the array. 
> 
> >  
> >> 
> >> I configure a small (<1GB) "/boot" partition as a primary partition (e.g. 
> >> /dev/sda1) on one of the disks, with the same space on the other disk 
> >> unused. [1]  I make another primary partition (e.g. /dev/sd[ab]2), on each 
> >> of the disks, sized to be one half of the size I want for my swap.  The 
> >> rest of the space on each disk goes into a single, large, logical 
> >> partition (e.g. /dev/sd[ab]5).
Yes, I have made space on other disks for this also.  I don't know how to use 
them but it seemed the redundancy may come in handy.
> >> 
> > This is similar to my setup.  I have the swaps on a separate RAID0.
> 
> Should work fine.  Of course you should read Pascal's post for a different 
> point of view.

Yes, those considerations are ringing in my head.  It will be a trade-off 
consideration from now on.
> 
> > 
> >> The two swap partitions I set up as a RAID0 (e.g. /dev/md0).  This will be 
> >> my system swap. [2]
> >> 
> >> The two large logical partitions, I configure into a RAID1 (e.g. 
> >> /dev/md1). [3]
> >> 
> > I made my large partition a primary.  Could this be problematic?
> 
> It would only be a problem if you need another primary partition for 
> something.  You only get 3 primary partitions on a disk, so I like to leave 
> one free "just incase".
> 
> > 

> >> 
> > I would like to know more about the purpose of these criteria - please.
> 
> Putting /tmp on tmpfs is for speed.  If your /tmp usage fits in RAM (after 
> allowing for apps, data, and disk buffering) you get RAM speed access to your 
> temporary data.  If it overflows, then the excess goes into swap, so you're 
> no worse off than if you had /tmp on hard disk in the first place.  This can 
> make large compilations (as an example) run *much* faster.  On the other 
> hand, if you are tight on RAM, putting /tmp on disk doesn't hurt, and 
> eliminates a source of contention for RAM which is assumed to be a scarce 
> resource.

This sounds like an experiment I will need to exercise.

> 
> The 50% figure is just a rule of thumb I picked up over the years.  There's 
> nothing magic about it.  And, as always, YMMV.  It's highly application 
> dependent.
> 
> > 

> 
> It might make sense to create a logical volume with some of the unused space 
> in the volume group called (for lack of a better name) /backup.  It's 
> "reliable" in the sense that it resides on a RAID1/5/6 array, so it's a good 
> place to put your backups of things like /boot.  For "ultimate" backup, I 
> usually use huge (e.g. 4TB or larger) external USB disk drives.  I don't RAID 
> them, instead I have two or three and rotate amongst them using each one for 
> a week, then swapping it out for the next one.  The currently offline 
> drive(s) I keep in a fire-proof safe, preferably in a separate building from 
> the server...
> 
Thanks,
Ray



Re: How to Boot with LVM

2015-09-07 Thread Pascal Hambourg
ray a écrit :
> 
> AMD64 32G RAM
> sda, sdb 32GB + 32GB, RAID0 - md0, LVM, GParted shows 1MB reserved, 1 GB (EFI)
> sdc, sdd 64GB + 64GB, RAID0- md1, md127, LVM, GParted shows 1MB reserved, 1 
> GB (EFI)
> sde, sdf 120GB + 120GB, RAID0- md0, md126 LVM, GParted shows 1MB reserved, 1 
> GB (EFI)
^
I guess you mean md2 ?

> sdg, sdh are 2 and 4 GB HD, sdg currently hosts debian8.+q++q
  
I guess you mean TB ? Hard to find 4 GB hard disks these days.

> md0-2 form a vg for the new debian8 as dom0 for a xen instance, md127
> is a vg, md126 is a vg.  md127 & 126 are for swap files.

Why use separate VGs for swaps ? Performance issues ?

> gdisk reports that the sda and sdb have gpt partitions.

You also report that they contain an EFI system partition. This is
required for booting a disk in UEFI mode (as opposed to legacy BIOS
mode, which benefits from a BIOS boot partition instead). If the system
boots in BIOS/legacy mode instead of native UEFI mode, you should
convert the EFI system partition into a BIOS boot partition or create a
BIOS boot partition in the 1 MB reserved space.

Did the Debian installer boot in EFI or BIOS/legacy mode ?
You can see it in different ways.

In BIOS mode :
- it displays the usual well known boot screen from ISOLinux
- assisted mode using the whole GPT disk creates a BIOS boot partition
- the "BIOS boot" partition type is available in manual mode on GPT disks
- in expert mode, GRUB (grub-pc) and LILO are available as the
bootloader and the installer prompts for the boot device

In UEFI mode :
- it displays a different boot screen from GRUB
- assisted mode using the whole disk creates an EFI system partition
mounted on /boot/efi
- the "EFI system" partition type is available in manual mode
- even in expert mode, only GRUB (grub-efi) is available as the
bootloader and the installer does not prompt for the boot device.

What is the boot mode of the already installed system ?
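
One quick way to tell on a running system (a simple sketch):

# [ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in BIOS/legacy mode"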

> I have attempted again to install a new debian8 on the LVM on RAID0
> for sda.  I learned more about using the installer for partitioning so I
> was able to get /boot on a RAID0 partition (outside of the vg).  

If the installer booted in BIOS mode, what did you select as the boot
device during GRUB installation ?

> On rebooting, nothing would boot.  The screen message said to insert
> the media to boot from.

So the firmware does not see a boot disk. Did you remove the harddisks
containing the existing Debian installation ?



Re: How to Boot with LVM

2015-09-07 Thread Rick Thomas

On Sep 5, 2015, at 7:24 PM, ray  wrote:

> I would like to configure LVMs for everything including boot.

Is it “just for fun” or do you have a real-world reason for wanting everything, 
including boot, to be on LVM?

I’ll describe my own typical setup (special purpose systems may have different 
setups to meet special purpose needs).  For purposes of illustration, I’ll 
describe a system with two identical disks.  The principles should be clear as 
they apply to systems with larger or more varied configurations.  If you have 
only a single disk, you can skip the RAID parts in this and go straight to LVM.

I configure a small (<1GB) “/boot” partition as a primary partition (e.g. 
/dev/sda1) on one of the disks, with the same space on the other disk unused. 
[1]  I make another primary partition (e.g. /dev/sd[ab]2) , on each of the 
disks, sized to be one half of the size I want for my swap.  The rest of the 
space on each disk goes into a single, large, logical partition (e.g. 
/dev/sd[ab]5).

The two swap partitions I set up as a RAID0 (e.g. /dev/md0).  This will be my 
system swap. [2]

The two large logical partitions, I configure into a RAID1 (e.g. /dev/md1). [3]

I configure the RAID1 as the physical volume for a single volume group which I 
partition using LVM into a root that’s big enough to be about 50% full when 
fully installed,  and /home that’s as big as I think I’ll need for my users.  
The remainder of the VG I leave unconfigured, so I can grow into it as needs 
become apparent over time.

If I have enough RAM to make it useful, I’ll put /tmp on a tmpfs.  I size it at 
about 50% of my swap space.  With a smaller RAM, I make a separate logical 
volume for /tmp.
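
For what it's worth, the same layout expressed as commands would look roughly 
like this (device names and sizes are only illustrative; the Debian installer's 
partitioner can do all of this interactively):

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
# mkswap /dev/md0                         # striped swap
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
# pvcreate /dev/md1
# vgcreate root /dev/md1                  # VG named "root", as in the lsblk example below
# lvcreate -L 19G  -n root root           # / sized to be about 50% full when installed
# lvcreate -L 210G -n home root           # /home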

[1] I know there are ways to make grub work with RAID1, but it’s too 
complicated for me to get it right.  Instead, I just make regular backups of 
the /boot partition.  If the disk with /boot on it develops a bad spot in an 
inconvenient place, I simply boot from a CD in rescue mode and restore the 
contents of /boot from a backup into the unused space that I reserved on the 
other disk.

[2] There’s no particular point in putting swap on a redundant RAID.  If your 
swap develops a bad spot, you probably want to boot from a CD into rescue mode 
ASAP so you can take necessary measures to fix the problem.  Using RAID1 for 
swap would just mask the problem — possibly until it’s too late.

[3] Conversely, everything else on the system wants to be redundantly 
protected.  If I have three or more disks, I’ll use RAID5; with four or more I 
might opt for RAID6.

Here’s an example:

rbthomas@monk:~$ lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                    8:0    0 465.8G  0 disk  
|-sda1                 8:1    0   953M  0 part  /boot
|-sda2                 8:2    0  18.6G  0 part  
| `-md0                9:0    0  37.3G  0 raid0 [SWAP]
|-sda3                 8:3    0     1K  0 part  
`-sda5                 8:5    0 446.2G  0 part  
  `-md1                9:1    0 446.1G  0 raid1 
    |-root-root      253:0    0  18.6G  0 lvm   /
    `-root-home      253:3    0   210G  0 lvm   /home
sdb                    8:16   0 465.8G  0 disk  
|-sdb1                 8:17   0   953M  0 part  
|-sdb2                 8:18   0  18.6G  0 part  
| `-md0                9:0    0  37.3G  0 raid0 [SWAP]
|-sdb3                 8:19   0     1K  0 part  
`-sdb5                 8:21   0 446.2G  0 part  
  `-md1                9:1    0 446.1G  0 raid1 
    |-root-root      253:0    0  18.6G  0 lvm   /
    `-root-home      253:3    0   210G  0 lvm   /home
sr0                   11:0    1  1024M  0 rom   


Enjoy!
Rick


Re: How to Boot with LVM

2015-09-07 Thread Pascal Hambourg
Rick Thomas a écrit :
> 
> I configure a small (<1GB) "/boot" partition as a primary partition 
> (e.g. /dev/sda1) on one of the disks, with the same space on the other 
> disk unused. [1]

(...)

> The two swap partitions I set up as a RAID0 (e.g. /dev/md0).  This will 
> be my system swap. [2]
(...)
> [1] I know there are ways to make grub work with RAID1, but it's too 
> complicated for me to get it right.

1) Set up /boot as RAID 1 (or 5, 6, 10).
2) grub-install /dev/sda ; grub-install /dev/sdb
How is that complicated ?
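
Spelled out as a rough sketch (BIOS/legacy case, fresh install, device names 
as examples):

# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mkfs.ext4 /dev/md2          # use it as /boot
# grub-install /dev/sda
# grub-install /dev/sdb       # either disk can then boot on its own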

> [2] There's no particular point in putting swap on a redundant RAID.  

Yes there is : system availability. Defective swap may have the same
catastrophic effect as defective memory.
RAID is not only about data redundancy. Redundancy is used to allow the
system to tolerate a disk fault and continue to operate until you
decide to replace the faulty disk. But maybe you don't care about your
system availability.

> If your swap develops a bad spot, you probably want to boot from a CD 
> into rescue mode ASAP so you can take necessary measures to fix the 
> problem. Using RAID1 for swap would just mask the problem - possibly
> until it's too late.

Are you serious ?
Mdadm monitoring can warn you as soon as a faulty disk is detected,
allowing you to take appropriate action while the system is still operating.
Without redundancy, a defective swap may cause the system to misbehave
or even crash at any time without any warning !

> [3] Conversely, everything else on the system wants to be redundantly 
> protected.

Why ? If you don't care about availability, just restore from the
backup. No need for RAID.



Re: How to Boot with LVM

2015-09-07 Thread ray
Rick,

Thank you for responding and providing all the info.  

On Monday, September 7, 2015 at 6:20:07 AM UTC-5, Rick Thomas wrote:
> On Sep 5, 2015, at 7:24 PM, ray wrote:
> 
> > I would like to configure LVMs for everything including boot.
> 
> Is it "just for fun" or do you have a real-world reason for wanting 
> everything, including boot, to be on LVM?
Lack of knowledge.  I was expecting it to be cleaner.  But it no longer looks 
that way.
> 
> I'll describe my own typical setup (special purpose systems may have 
> different setups to meet special purpose needs).  For purposes of 
> illustration, I'll describe a system with two identical disks.  The 
> principles should be clear as they apply to systems with larger or more 
> varied configurations.  If you have only a single disk, you can skip the RAID 
> parts in this and go straight to LVM.

I have 3 pairs of SSDs, each pair in a RAID0. 
> 
> I configure a small (<1GB) "/boot" partition as a primary partition (e.g. 
> /dev/sda1) on one of the disks, with the same space on the other disk unused. 
> [1]  I make another primary partition (e.g. /dev/sd[ab]2) , on each of the 
> disks, sized to be one half of the size I want for my swap.  The rest of the 
> space on each disk goes into a single, large, logical partition (e.g. 
> /dev/sd[ab]5).
> 
This is similar to my setup.  I have the swaps on a separate RAID0.

> The two swap partitions I set up as a RAID0 (e.g. /dev/md0).  This will be my 
> system swap. [2]
> 
> The two large logical partitions, I configure into a RAID1 (e.g. /dev/md1). 
> [3]
> 
I made my large partition a primary.  Could this be problematic?

> I configure the RAID1 as the physical volume for a single volume group which 
> I partition using LVM into a root that's big enough to be about 50% full when 
> fully installed,  and /home that's as big as I think I'll need for my users.  
> The remainder of the VG I leave unconfigured, so I can grow into it as needs 
> become apparent over time.
> 
> If I have enough RAM to make it useful, I'll put /tmp on a tmpfs.  I size it 
> at about 50% of my swap space.  With a smaller RAM, I make a separate logical 
> volume for /tmp.
> 
I would like to know more about the purpose of these criteria - please.

> [1] I know there are ways to make grub work with RAID1, but it's too 
> complicated for me to get it right.  Instead, I just make regular backups of 
> the /boot partition.  If the disk with /boot on it develops a bad spot in an 
> inconvenient place, I simply boot from a CD in rescue mode and restore the 
> contents of /boot from a backup into the unused space that I reserved on the 
> other disk.

Yes, that was my plan also.  But it was also my plan to backup the rest of the 
system to HDD.  As such, I planned for no redundancy in drive configuration and 
only stripe for speed.

> 
> [2] There's no particular point in putting swap on a redundant RAID.  If your 
> swap develops a bad spot, you probably want to boot from a CD into rescue 
> mode ASAP so you can take necessary measures to fix the problem.  Using RAID1 
> for swap would just mask the problem -- possibly until it's too late.
> 
> [3] Conversely, everything else on the system wants to be redundantly 
> protected.  If I have three or more disks, I'll use RAID5; with four or more 
> I might opt for RAID6.
> 
> Here's an example:
> 
> rbthomas@monk:~$ lsblk
> NAME             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sda                8:0    0 465.8G  0 disk
> |-sda1             8:1    0   953M  0 part  /boot
> |-sda2             8:2    0  18.6G  0 part
> | `-md0            9:0    0  37.3G  0 raid0 [SWAP]
> |-sda3             8:3    0     1K  0 part
> `-sda5             8:5    0 446.2G  0 part
>   `-md1            9:1    0 446.1G  0 raid1
>     |-root-root  253:0    0  18.6G  0 lvm   /
>     `-root-home  253:3    0   210G  0 lvm   /home
> sdb                8:16   0 465.8G  0 disk
> |-sdb1             8:17   0   953M  0 part
> |-sdb2             8:18   0  18.6G  0 part
> | `-md0            9:0    0  37.3G  0 raid0 [SWAP]
> |-sdb3             8:19   0     1K  0 part
> `-sdb5             8:21   0 446.2G  0 part
>   `-md1            9:1    0 446.1G  0 raid1
>     |-root-root  253:0    0  18.6G  0 lvm   /
>     `-root-home  253:3    0   210G  0 lvm   /home
> sr0               11:0    1  1024M  0 rom
> 
> 
> Enjoy!
> Rick



Re: How to Boot with LVM

2015-09-07 Thread ray
Update:

I booted up a rEFInd stick.  Two boot entries were found; the first one was the 
HDD Debian instance and it booted.  The second would not boot, so I could not 
identify it.

I opened the EFI shell and found:
Fs0: no content
Fs1: no content
Fs2: no content
Fs3:  rEFInd stick
Fs4 - 7: no content
Fs8: EFI
  cd EFI -> debian
  cd debian -> grubx64.efi
  No further content.
Fs9: has linux partitions and vmlinuz and initrd.img
This was built 8/30 and last updated 9/5 -> the HDD debian instance.
Fs10 - 12: not a correct mapping.

Blk0 - 8: no content
Blk9: rEFInd stick
Blk10 - 24: no content
Blk25: not an internal or external command or...
Blk26: EFI -> same as fs8:
Blk27: has linux partitions ...
Blk28: no content
Blk29: not a correct mapping.

I think there is a way to use this to recover the installation, but I have not 
found it yet.
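
My current guess at a recovery path, pieced together from reading, so please
correct me if it is wrong (the VG/LV names are placeholders, and sda1 is only
assumed to be the EFI system partition):

  # From the installer's rescue mode or a live system:
  mount /dev/mapper/<vg>-<rootlv> /mnt            # the root logical volume
  mount /dev/sda1 /mnt/boot/efi                   # the EFI system partition
  # (mount a separate /boot too if one exists)
  for d in dev proc sys; do mount --bind /$d /mnt/$d; done
  # Reinstall grub-efi and regenerate grub.cfg; grub-install should also
  # re-register the boot entry with the firmware
  chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi
  chroot /mnt update-grub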



Re: How to Boot with LVM

2015-09-07 Thread ray
On Monday, September 7, 2015 at 3:40:05 AM UTC-5, Pascal Hambourg wrote:
> ray wrote:
> > 
> > AMD64 32G RAM
> > sda, sdb 32GB + 32GB, RAID0 - md0, LVM, GParted shows 1MB reserved, 1 GB 
> > (EFI)
> > sdc, sdd 64GB + 64GB, RAID0- md1, md127, LVM, GParted shows 1MB reserved, 1 
> > GB (EFI)
> > sde, sdf 120GB + 120GB, RAID0- md0, md126 LVM, GParted shows 1MB reserved, 
> > 1 GB (EFI)
> ^
> I guess you mean md2 ?
Yes, you are correct, md2.
> 
> > sdg, sdh are 2 and 4 GB HD, sdg currently hosts debian8.
>   
> I guess you mean TB ? Hard to find 4 GB hard disks these days.
Yes, you are correct, TB.
> 
> > md0-2 form a vg for the new debian8 as dom0 for a xen instance, md127
> > is a vg, md126 is a vg.  md127 & 126 are for swap files.
> 
> Why use separate VGs for swaps ? Performance issues ?
Yes, I chose VGs on two different RAID0s to minimize the risk of swap bottlenecks, 
and moved them off of the installation RAID0 volume.  I don't know that it will be 
a problem on this machine, but it was easy to architect a mitigation, and I have 
had this problem on all my other workstations.  
> 
> > gdisk reports that the sda and sdb have gpt partitions.
> 
> You also report that they contain an EFI system partition. This is
> required for booting a disk in UEFI mode (as opposed to legacy BIOS
> mode, which benefits from a BIOS boot partition instead). If the system
> boots in BIOS/legacy mode instead of native UEFI mode, you should
> convert the EFI system partition into a BIOS boot partition or create a
> BIOS boot partition in the 1 MB reserved space.
> 
> Did the Debian installer boot in EFI or BIOS/legacy mode ?
> You can see it in different ways.
> 
> In BIOS mode :
> - it displays the usual well known boot screen from ISOLinux
> - assisted mode using the whole GPT disk creates a BIOS boot partition
> - the "BIOS boot" partition type is available in manual mode on GPT disks
> - in expert mode, GRUB (grub-pc) and LILO are available as the
> bootloader and the installer prompts for the boot device
> 
> In UEFI mode :
> - it displays a different boot screen from GRUB
> - assisted mode using the whole disk creates an EFI system partition
> mounted on /boot/efi
> - the "EFI system" partition type is available in manual mode
> - even in expert mode, only GRUB (grub-efi) is available as the
> bootloader and the installer does not prompt for the boot device.
The motherboard BIOS reports the Debian installation media as a UEFI USB device.  
The installer boot screen says UEFI, and it is the same media that was used for 
the install on the hard disk.  
>
> What is the boot mode of the already installed system ?
UEFI.
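The quickest check I know of from a running system, generic rather than
specific to this box:

  # /sys/firmware/efi only exists when the kernel was booted via UEFI
  [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "BIOS/legacy boot"
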
> 
> > I have attempted again to install a new debian8 on the LVM on RAID0
> > for sda.  I learned more about using the installer for partitioning, so I
> > was able to get /boot on a RAID0 partition (outside of the VG).  
> 
> If the installer booted in BIOS mode, what did you select as the boot
> device during GRUB installation ?
It installed in UEFI mode.
> 
> > On rebooting, nothing would boot.  The screen message said to insert
> > the media to boot from.
> 
> So the firmware does not see a boot disk. Did you remove the harddisks
> containing the existing Debian installation ?
The hard disks have not been changed.  This is my second round.  Before I set up 
the RAID, I had a Debian instance on the HDD that worked fine, and when I 
installed a new instance on the SSDs, neither would boot.

So I rebuilt the HDD instance, ran it to configure the SSDs, and again, after 
installing to the SSDs, nothing will boot.
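
One thing I still plan to try from the working HDD instance, in case it helps
anyone else reading (a standard command, not something I have run yet):

  # List the firmware's boot entries and the paths they point to on the ESP
  efibootmgr -v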



Re: How to Boot with LVM

2015-09-06 Thread ray
Pascal,

Thank you for the informative response.  I would like to ensure I address your 
concerns in this recovery.
> (...)
> > System description:
> > amd64 with a HDD and 3 pairs of SSDs.  The SSD are set up as RAID0 in
> > pairs.  The HDD currently hosts Debian 8.  I used this to configure the
> > SSDs to overlay the RAID0s with an LVM.  The RAIDs were built on a
> > second partition of each SSD.  The first partition is 1G for potential
> > boot use.  
AMD64 32G RAM
sda, sdb 32GB + 32GB, RAID0 - md0, LVM, GParted shows 1MB reserved, 1 GB (EFI)
sdc, sdd 64GB + 64GB, RAID0- md1, md127, LVM, GParted shows 1MB reserved, 1 GB 
(EFI)
sde, sdf 120GB + 120GB, RAID0- md0, md126 LVM, GParted shows 1MB reserved, 1 GB 
(EFI)
sdg, sdh are 2 and 4 GB HD, sdg currently hosts debian8.+q++q
md0-2 form a vg for the new debian8 as dom0 for a xen instance, md127 is a vg, 
md126 is a vg.  md127 & 126 are for swap files.
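
For completeness, the layout above can be cross-checked with the usual
commands (generic, nothing machine-specific):

  cat /proc/mdstat                            # RAID arrays and their members
  lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # block device tree
  pvs; vgs; lvs                               # LVM physical volumes, VGs, LVs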

> 
> LVM on multiple RAID0 ? You don't care much about reliability, do you ?
> Lose a single disk and you may lose all your LVM.
Yes, there is risk.  This is a personal workstation; it will be backed up 
regularly and doesn't need to be up all the time.  I have lost motherboards more 
often than disks, and I have never lost an SSD.  I chose Linux RAID over 
motherboard RAID for this reason.

> > I use a USB stick to load the second Debian.  I have an LVM partition
> > for the new installation.  When I select it, the installer (in manual
> > mode) says it is not bootable and go back to setup to correct.  When I
> > go back to setup, I don't see any way to do anything but select a VG,
> > dm, sdx, or HDD.
> 
> I assume the architecture is i386 or amd64.
amd64
> 
> If /boot is on LVM and you choose GRUB as the bootloader in BIOS/legacy
> mode (grub-pc), you must select a whole disk.
> 
> If the disk has an MSDOS partition table, it should have enough
> unallocated space before the first partition ; modern partition
> management tools including the Debian installer will leave ~1 MB
> unallocated space by default, which is enough.
> 
> If the disk has a GPT partition table, it should contain a small
> unformatted partition of type "BIOS boot" or "bios_grub" ; again, 1 MB
> is enough. If the disk is larger than 2 TiB, the BIOS boot partition
> should be located within the 2 TiB boundary so that it can be accessed
> through the BIOS disk functions.
> 
gdisk reports that the sda and sdb have gpt partitions.

> The unallocated space or BIOS boot partition is required to install
> GRUB's core image, which can be booted by GRUB's boot image installed
> in the MBR of the disk. The core image will include the modules needed
> to read RAID and LVM volumes in order to access the other required boot
> files located in /boot. Otherwise, GRUB's core image will be installed
> in /boot/grub and retrieved by GRUB's boot image using block lists ;
> however, this is less reliable because blocks may be moved around by LVM
> or the filesystem.

I have attempted again to install a new debian8 on the LVM on RAID0 for sda.  
I learned more about using the installer for partitioning, so I was able to get 
/boot on a RAID0 partition (outside of the VG).  

On rebooting, nothing would boot.  The screen message said to insert the media 
to boot from.  Now I get to recover whatever I lost.  I have not figured out 
how to do that.  Any help would be appreciated.



Re: How to Boot with LVM

2015-09-06 Thread Pascal Hambourg
Hello,

ray wrote:
> I would like to configure LVMs for everything including boot.  I have
> read that others have done this, but I have not found the method.
(...)
> System description:
> amd64 with a HDD and 3 pairs of SSDs.  The SSD are set up as RAID0 in
> pairs.  The HDD currently hosts Debian 8.  I used this to configure the
> SSDs to overlay the RAID0s with an LVM.  The RAIDs were built on a
> second partition of each SSD.  The first partition is 1G for potential
> boot use.  

LVM on multiple RAID0 ? You don't care much about reliability, do you ?
Lose a single disk and you may lose all your LVM.

> I use a USB stick to load the second Debian.  I have an LVM partition
> for the new installation.  When I select it, the installer (in manual
> mode) says it is not bootable and go back to setup to correct.  When I
> go back to setup, I don't see any way to do anything but select a VG,
> dm, sdx, or HDD.

I assume the architecture is i386 or amd64.

If /boot is on LVM and you choose GRUB as the bootloader in BIOS/legacy
mode (grub-pc), you must select a whole disk.

If the disk has an MSDOS partition table, it should have enough
unallocated space before the first partition ; modern partition
management tools including the Debian installer will leave ~1 MB
unallocated space by default, which is enough.

If the disk has a GPT partition table, it should contain a small
unformatted partition of type "BIOS boot" or "bios_grub" ; again, 1 MB
is enough. If the disk is larger than 2 TiB, the BIOS boot partition
should be located within the 2 TiB boundary so that it can be accessed
through the BIOS disk functions.
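
For example, such a partition can be added with sgdisk (the disk name and
partition number here are only examples):

  # Create a 1 MiB partition and mark it as "BIOS boot" (GPT type code ef02)
  sgdisk --new=3:0:+1M --typecode=3:ef02 /dev/sda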

The unallocated space or BIOS boot partition is required to install
GRUB's core image, which can be booted by GRUB's boot image installed
in the MBR of the disk. The core image will include the modules needed
to read RAID and LVM volumes in order to access the other required boot
files located in /boot. Otherwise, GRUB's core image will be installed
in /boot/grub and retrieved by GRUB's boot image using block lists ;
however, this is less reliable because blocks may be moved around by LVM
or the filesystem.
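
To make this concrete, once the embedding area exists, the install step is
simply (a sketch only; the disk name is an example):

  # Install GRUB's boot image to the MBR and embed the core image; the needed
  # mdraid/lvm modules are picked up automatically
  grub-install /dev/sda
  update-grub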