Re: System fails to boot with zfs root on nvme mirror when not using full disks

2017-10-23 Thread Markus Wild
I have found the problem... I seem to be a victim of 
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=221075

I wasn't able to get the system to boot just by using /dev/nvd0p3 and 
/dev/nvd1p3 (avoiding labels), but I
did succeed with a custom kernel as per the PR:

include GENERIC

ident NOMMC
nodevice mmc
nodevice mmcsd
nodevice sdhci


So, there is indeed a problem with that part...
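
For anyone else hitting that PR: the workaround config above is built like any custom kernel. A sketch, assuming the stock source tree under /usr/src (the temp file below is only for illustration; on the real system the config goes in /usr/src/sys/amd64/conf/NOMMC):

```shell
# Write the NOMMC config (temp copy here just to illustrate; place it
# as /usr/src/sys/amd64/conf/NOMMC on the real system):
conf=$(mktemp)
cat > "$conf" <<'EOF'
include GENERIC
ident NOMMC
nodevice mmc
nodevice mmcsd
nodevice sdhci
EOF
# Sanity check: three drivers removed relative to GENERIC
grep -c '^nodevice' "$conf"
rm -f "$conf"
# Then, on the real system:
#   make -C /usr/src buildkernel KERNCONF=NOMMC
#   make -C /usr/src installkernel KERNCONF=NOMMC
#   shutdown -r now
```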

Cheers,
Markus
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: System fails to boot with zfs root on nvme mirror when not using full disks

2017-10-22 Thread Markus Wild
Hello Gary

thanks for your input!

> My suspicion is that you didn't make a backup of the new
> /boot/zfs/zpool.cache
> 
> From the USB live booted system, when you recreated the zpool you
> likely changed parameters that live in the cache.  You need to copy
> that to the new boot pool or ZFS can get angry

It's a possibility... here's a new attempt from the usb live system:

gpart delete -i 3 nvd0
gpart delete -i 3 nvd1
gpart add -a 4k -s 50G -t freebsd-zfs -l zfs0 nvd0
gpart add -a 4k -s 50G -t freebsd-zfs -l zfs1 nvd1
sysctl vfs.zfs.min_auto_ashift=12
zpool create -m none -o cachefile=/var/tmp/zpool.cache zroot mirror /dev/gpt/zfs0 /dev/gpt/zfs1
zfs receive -Fud zroot < zroot.zfs
zpool set bootfs=zroot/ROOT/default zroot
mount -t zfs zroot/ROOT/default /mnt
cp /var/tmp/zpool.cache /mnt/boot/zfs/zpool.cache
umount /mnt
reboot
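
(Side note on the sysctl above, in case it helps anyone reading along: vfs.zfs.min_auto_ashift=12 forces an ashift of 12, and ashift is the log2 of the smallest block ZFS will write, so this matches the 4K alignment requested with "gpart add -a 4k":)

```shell
# ashift is log2 of the pool's minimum write block size;
# ashift=12 gives 2^12 = 4096-byte (4K) blocks.
ashift=12
echo $((1 << ashift))
```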


Unfortunately, this didn't change anything. I'm still getting the

Trying to mount root from zfs:zroot/ROOT/default []...
Mounting from zfs:zroot/ROOT/default failed with error 6.

error :(

Any other ideas?

Cheers,
Markus


Re: System fails to boot with zfs root on nvme mirror when not using full disks

2017-10-20 Thread Gary Palmer
On Fri, Oct 20, 2017 at 05:13:45PM +0200, Markus Wild wrote:
> Hello list,
> 
> I have a particularly odd problem, where I can't figure out what's going on, 
> or whether I'm possibly doing something
> stupid...
> 
> Short summary:
> - Supermicro X10DRI-T motherboard, using UEFI boot, 128G RAM, 2 CPUs
> - 2 Intel DC P3700 NVME cards, to be used as mirrored zfs root and mirrored 
> log devices for a data pool
> - FreeBSD-11.1 release installs fine off memstick, and the built system boots 
> correctly, but this system uses the entire
>   disks (so I need to shrink the system to make room for log partitions)
> - so I then rebooted into usb live system, and did a gpart backup of the nvme 
> drives (see later). zfs snapshot -r, zfs
>   send -R > backup, gpart delete last index and recreate with shorter size on 
> both drives, recreate zroot pool with
>   correct ashift, restore with zfs receive from backup, set bootfs, reboot
> - the rebooting system bootloader finds the zroot pool correctly, and 
> proceeds to load the kernel. However, when it's
>   supposed to mount the root filesystem, I get:
> Trying to mount root from zfs:zroot/ROOT/default []...
> Mounting from zfs:zroot/ROOT/default failed with error 6.
> - when I list the available boot devices, all partitions of the nvme disks 
> are listed

My suspicion is that you didn't make a backup of the new
/boot/zfs/zpool.cache

From the USB live booted system, when you recreated the zpool you
likely changed parameters that live in the cache.  You need to copy that
to the new boot pool or ZFS can get angry

Just a suspicion

Regards,

Gary


System fails to boot with zfs root on nvme mirror when not using full disks

2017-10-20 Thread Markus Wild
Hello list,

I have a particularly odd problem, where I can't figure out what's going on, or 
whether I'm possibly doing something
stupid...

Short summary:
- Supermicro X10DRI-T motherboard, using UEFI boot, 128G RAM, 2 CPUs
- 2 Intel DC P3700 NVME cards, to be used as mirrored zfs root and mirrored log 
devices for a data pool
- FreeBSD-11.1 release installs fine off memstick, and the built system boots 
correctly, but this system uses the entire
  disks (so I need to shrink the system to make room for log partitions)
- so I then rebooted into usb live system, and did a gpart backup of the nvme 
drives (see later). zfs snapshot -r, zfs
  send -R > backup, gpart delete last index and recreate with shorter size on 
both drives, recreate zroot pool with
  correct ashift, restore with zfs receive from backup, set bootfs, reboot
- the rebooting system bootloader finds the zroot pool correctly, and proceeds 
to load the kernel. However, when it's
  supposed to mount the root filesystem, I get:
Trying to mount root from zfs:zroot/ROOT/default []...
Mounting from zfs:zroot/ROOT/default failed with error 6.
- when I list the available boot devices, all partitions of the nvme disks are 
listed
- I can import this pool without any issues with the usb live system, there are 
no errors on import.
- I then redid the whole exercise, but restored the previously backed up 
partition tables (full size zfs
  partition), did exactly the same steps as described above to restore the 
previous zfs filesystem, rebooted, and the
  system started up normally.

So, my impression is: the kernel is doing something odd when trying to import 
the zroot pool if that pool isn't using the whole physical disk space. Is there 
some way to enable debugging for the "mount a root zfs pool" step of the kernel, 
to see where this is failing?
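
In case it helps narrow things down: the number in "failed with error 6" is a plain errno value, and 6 is ENXIO ("Device not configured" on FreeBSD), which would mean the kernel can't open the pool's underlying vdev devices at mount-root time. Decoding it, purely for illustration:

```python
import errno
import os

# "Mounting from zfs:zroot/ROOT/default failed with error 6":
# the kernel reports mount-root failures as raw errno values.
# Errno 6 is ENXIO; on FreeBSD strerror() renders it as "Device
# not configured", i.e. the underlying vdevs weren't usable.
print(errno.errorcode[6], "-", os.strerror(6))
```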

Here's a more detailed history of my commands to reduce the pool size:

{I'm in a directory on the data pool here, where I can store stuff. I've booted 
from the usb stick into the live system}
zpool import -Nf zroot
zfs snapshot -r zroot@backup
zfs send -R zroot@backup > zroot.zfs
zpool destroy zroot
gpart delete -i 3 nvd0
gpart delete -i 3 nvd1
gpart add -a 4k -s 50G -t freebsd-zfs -l zfs0 nvd0
gpart add -a 4k -s 50G -t freebsd-zfs -l zfs1 nvd1
sysctl vfs.zfs.min_auto_ashift=12
zpool create -f -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot mirror /dev/gpt/zfs0 /dev/gpt/zfs1
zfs receive -Fud zroot < zroot.zfs
zpool set bootfs=zroot/ROOT/default zroot
reboot

Also, here's a diff between full and partial gpart backup:

--- nvd0.gpart  2017-10-20 14:04:26.583846000 +0200
+++ nvd0.gpart.new  2017-10-20 15:39:20.184445000 +0200
@@ -1,4 +1,4 @@
 GPT 152
 1           efi        40    409600 efiboot0
 2  freebsd-swap    411648  33554432 swap0
-3   freebsd-zfs  33966080 747456512 zfs0
+3   freebsd-zfs  33966080 104857600 zfs0
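
For the record, the lengths in a gpart backup are in 512-byte sectors, so the new partition length in the diff works out to exactly the 50G passed to gpart add:

```shell
# gpart backup lists partition lengths in 512-byte sectors:
# 104857600 sectors * 512 bytes = 53687091200 bytes = 50 GiB.
sectors=104857600
echo $(( sectors * 512 / 1024 / 1024 / 1024 ))
```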


I'm really at a loss to see why this is failing. I can provide detailed dmesg 
output of the booting system, and screenshots of the kvm-efi-console of the 
failing boot.

Thanks for any help :)

Cheers,
Markus