Re: ZFS i/o error in recent 12.0

2018-03-22 Thread KIRIYAMA Kazuhiko
At Wed, 21 Mar 2018 08:01:24 +0100,
Thomas Steen Rasmussen wrote:
> 
> On 03/20/2018 12:00 AM, KIRIYAMA Kazuhiko wrote:
> > Hi,
> >
> > I've run into the sudden death of a ZFS full-volume
> > machine (r330434) about 10 days after installation [1]:
> >
> > ZFS: i/o error - all block copies unavailable
> > ZFS: can't read MOS of pool zroot
> > gptzfsboot: failed to mount default pool zroot
> >
> > FreeBSD/x86 boot
> > ZFS: i/o error - all block copies unavailable
> > ZFS: can't find dataset u
> > Default: zroot/<0x0>:
> > boot: 
> 
> Has this pool had new vdevs added to it since the server was installed?

No. /dev/mfid0p4 is a single RAID60 volume presented by the AVAGO MegaRAID driver [1].

> What does a "zpool status" look like when the pool is imported?

Like below:

root@t1:~ # zpool import -fR /mnt zroot
root@t1:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        zroot      ONLINE       0     0     0
          mfid0p4  ONLINE       0     0     0

errors: No known data errors
root@t1:~ # 

[1] http://ds.truefc.org/~kiri/freebsd/current/zfs/dmesg.boot

> 
> /Thomas
> 

---
KIRIYAMA Kazuhiko


Re: ZFS i/o error in recent 12.0

2018-03-21 Thread Markus Wild
Hello Thomas,

> > I had faced the exact same issue on a HP Microserver G8 with 8TB disks and 
> > a 16TB zpool on FreeBSD 11 about a year
> > ago.  
> I will ask you the same question as I asked the OP:
> 
> Has this pool had new vdevs added to it since the server was installed?

No. This is a microserver with only 4 (not even hotplug) trays. It was originally set up
using the FreeBSD installer. I had to apply the btx loader fix that retries a failed read
(a patch at the time; I don't know whether it is included by default now) to get around
BIOS bugs on that server, but after that the server booted fine. It's only after a bit of
use and a kernel update that things went south. I tried many different things at the time,
but the only approach that worked for me was to steal 2 of the 4 swap partitions I had
placed on every disk initially and build a mirrored boot zpool from those. The loader had
no problem loading the kernel from that, and when the kernel took over, it had no problem
using the original root pool (which the boot loader wasn't able to find/load). Hence my
conclusion that the 2nd-stage boot loader has a problem (probably due to yet another BIOS
bug on that server) loading blocks beyond a certain limit, which could be 2TB or 4TB.
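
As a rough sanity check of that theory against the gpart output quoted earlier in this
thread (a back-of-the-envelope sketch only, assuming the 512-byte sectors gpart reports):

    # classic 32-bit LBA (int13 without EDD extensions) can address 2^32 sectors:
    $ echo $(( (1 << 32) * 512 / (1024 * 1024 * 1024 * 1024) ))
    2
    # last sector of the OP's freebsd-zfs partition (start 268847104 + size 30978715648),
    # expressed in TiB:
    $ echo $(( (268847104 + 30978715648) * 512 / (1024 * 1024 * 1024 * 1024) ))
    14

so most of that 14T partition sits far beyond the 2 TiB mark that a non-EDD BIOS read can
reach.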

> What does a "zpool status" look like when the pool is imported?

$ zpool status
  pool: zboot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 21 03:58:36 2018
config:

        NAME               STATE     READ WRITE CKSUM
        zboot              ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            gpt/zfs-boot0  ONLINE       0     0     0
            gpt/zfs-boot1  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 6h49m with 0 errors on Sat Mar 10 10:17:49 2018
config:

        NAME            STATE     READ WRITE CKSUM
        zroot           ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gpt/zfs0    ONLINE       0     0     0
            gpt/zfs1    ONLINE       0     0     0
          mirror-1      ONLINE       0     0     0
            gpt/zfs2    ONLINE       0     0     0
            gpt/zfs3    ONLINE       0     0     0

errors: No known data errors

Please note: this server is in production at a customer now, and it's working fine with
this workaround. I just brought it up to offer a possible explanation for the original
poster's problem: it _might_ have nothing to do with a newer version of the current
kernel, but rather with the updated kernel being written to a new location on disk which
the boot loader can't read properly.

Cheers,
Markus


Re: ZFS i/o error in recent 12.0

2018-03-21 Thread Thomas Steen Rasmussen
On 03/20/2018 08:50 AM, Markus Wild wrote:
>
> I had faced the exact same issue on a HP Microserver G8 with 8TB disks and a 
> 16TB zpool on FreeBSD 11 about a year ago.
Hello,

I will ask you the same question as I asked the OP:

Has this pool had new vdevs added to it since the server was installed?
What does a "zpool status" look like when the pool is imported?

Explanation: Some controllers only make a small, fixed number of devices
visible to the BIOS during boot. Imagine a system was installed with, say, 4
disks in a pool, and 4 more were added later. If the HBA only shows 4 drives to
the BIOS during boot, you see this error.

If you think this might be relevant, you need to chase down a setting
called "maximum int13 devices for this adapter" or something like that.
See page 3-4 in this documentation:
https://supermicro.com/manuals/other/LSI_HostRAID_2308.pdf

The setting has been set to 4 on a bunch of servers I've bought over the
last few years. You install the server with 4 disks, later add new disks,
reboot one day and nothing works until you set the limit high enough
that the bootloader can see the whole pool, and you're good again.
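
A quick way to check whether you are in that situation (a sketch; the pool name is the
one used in this thread, your device names will differ): compare what the boot code can
see with what the pool needs. At the loader OK prompt:

    OK lsdev -v

lists the disks the BIOS/loader actually exposes. Then boot from a rescue USB stick,
import the pool and run:

    # zpool status zroot

If lsdev reports fewer disks than zpool status lists as vdevs, the int13 limit above is
the likely culprit.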

/Thomas



Re: ZFS i/o error in recent 12.0

2018-03-21 Thread Thomas Steen Rasmussen
On 03/20/2018 12:00 AM, KIRIYAMA Kazuhiko wrote:
> Hi,
>
> I've run into the sudden death of a ZFS full-volume
> machine (r330434) about 10 days after installation [1]:
>
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
> gptzfsboot: failed to mount default pool zroot
>
> FreeBSD/x86 boot
> ZFS: i/o error - all block copies unavailable
> ZFS: can't find dataset u
> Default: zroot/<0x0>:
> boot: 

Has this pool had new vdevs added to it since the server was installed?
What does a "zpool status" look like when the pool is imported?

/Thomas



Re: ZFS i/o error in recent 12.0

2018-03-20 Thread Allan Jude
On 2018-03-20 10:29, Andriy Gapon wrote:
> On 20/03/2018 09:09, Trond Endrestøl wrote:
>> This step has been a big no-no in the past. Never leave your 
>> bootpool/rootpool in an exported state if you intend to boot from it. 
>> For all I know, this advice might be superstition for the present 
>> versions of FreeBSD.
> 
> Yes, it is.  That does not matter at all now.
> 
>> From what I can tell from the above, you never created a new 
>> zpool.cache and copied it to its rightful place.
> 
> For the _root_ pool, zpool.cache does not matter either.
> It matters only for auto-import of additional pools, if any.
> 

As I mentioned previously, the error reported by the user occurs before it
is even possible to read zpool.cache, so it is definitely not the source
of the problem.

-- 
Allan Jude


Re: ZFS i/o error in recent 12.0

2018-03-20 Thread Andriy Gapon
On 20/03/2018 09:09, Trond Endrestøl wrote:
> This step has been a big no-no in the past. Never leave your 
> bootpool/rootpool in an exported state if you intend to boot from it. 
> For all I know, this advice might be superstition for the present 
> versions of FreeBSD.

Yes, it is.  That does not matter at all now.

> From what I can tell from the above, you never created a new 
> zpool.cache and copied it to its rightful place.

For the _root_ pool, zpool.cache does not matter either.
It matters only for auto-import of additional pools, if any.
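
(For such additional pools, the cache file can be refreshed while the pool is imported.
A minimal sketch, using the pool name from this thread and assuming it is imported with
altroot /mnt as elsewhere in the thread, so the cache lands on the pool's own /boot:

    # zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot

The same command applied to any additional pool is what records it in the cache so it is
auto-imported at boot; the root pool itself is found without it.)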

-- 
Andriy Gapon


Re: ZFS i/o error in recent 12.0

2018-03-20 Thread Toomas Soome


> On 20 Mar 2018, at 09:50, Markus Wild  wrote:
> 
> Hi there,
> 
>> I've run into the sudden death of a ZFS full-volume
>> machine (r330434) about 10 days after installation [1]:
>> 
>> ZFS: i/o error - all block copies unavailable
>> ZFS: can't read MOS of pool zroot
>> gptzfsboot: failed to mount default pool zroot
>> 
> 
>>268847104  30978715648  4  freebsd-zfs  (14T)
> 
> ^^^
> 
> 
> I faced the exact same issue on an HP Microserver G8 with 8TB disks and a 16TB zpool on
> FreeBSD 11 about a year ago. My conclusion was that over time (and with kernel updates),
> the blocks for the kernel file were reallocated to a later spot on the disks, and that
> however the loader fetches those blocks, it now failed to do so (perhaps a 2/4TB
> limit/bug in the BIOS of that server? Unfortunately there was no UEFI support for it;
> I don't know whether that has changed in the meantime). The pool always imported fine
> from the USB stick; the problem was only with the boot loader. I worked around it by
> stealing space from the swap partitions on two disks to build a "zboot" pool containing
> just the /boot directory, having the boot loader load the kernel from there, and then
> still mounting the real root pool to run the system off, using loader variables in the
> boot pool's loader.conf. It's a hack, but it has been working fine since (the server is
> used as a backup repository). This is what I have in the "zboot" boot/loader.conf:
> 
> # zfs boot kludge due to buggy bios
> vfs.root.mountfrom="zfs:zroot/ROOT/fbsd11"
> 
> 
> If you're facing the same problem, you might give this a shot? You seem to have plenty
> of swap to cannibalize as well ;)
> 

Please check with lsdev -v from the loader OK prompt - do the reported disk/partition
sizes make sense? Another thing: even if you do update to a current build, you want to
make sure your installed boot blocks are updated as well - otherwise you will have the
new binary in the /boot directory, but it is not installed in the boot block area…
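
For the GPT layout quoted in this thread (freebsd-boot at index 2 and the ESP at index 1
on mfid0), updating the boot blocks would look roughly like this - a sketch only: the
disk name and indices must match your own layout, and the ESP mount point here is merely
illustrative:

    # BIOS path: rewrite the protective MBR and the gptzfsboot stage
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 mfid0

    # UEFI path (only if you actually boot via the ESP): copy the new loader onto it;
    # the exact file name/path on the ESP depends on how it was originally populated
    mount -t msdosfs /dev/mfid0p1 /media
    cp /boot/loader.efi /media/efi/boot/bootx64.efi
    umount /media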

rgds,
toomas



Re: ZFS i/o error in recent 12.0

2018-03-20 Thread Markus Wild
Hi there,

> I've run into the sudden death of a ZFS full-volume
> machine (r330434) about 10 days after installation [1]:
> 
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
> gptzfsboot: failed to mount default pool zroot
> 

> 268847104  30978715648  4  freebsd-zfs  (14T)

^^^


I faced the exact same issue on an HP Microserver G8 with 8TB disks and a 16TB zpool on
FreeBSD 11 about a year ago. My conclusion was that over time (and with kernel updates),
the blocks for the kernel file were reallocated to a later spot on the disks, and that
however the loader fetches those blocks, it now failed to do so (perhaps a 2/4TB
limit/bug in the BIOS of that server? Unfortunately there was no UEFI support for it;
I don't know whether that has changed in the meantime). The pool always imported fine
from the USB stick; the problem was only with the boot loader. I worked around it by
stealing space from the swap partitions on two disks to build a "zboot" pool containing
just the /boot directory, having the boot loader load the kernel from there, and then
still mounting the real root pool to run the system off, using loader variables in the
boot pool's loader.conf. It's a hack, but it has been working fine since (the server is
used as a backup repository). This is what I have in the "zboot" boot/loader.conf:

# zfs boot kludge due to buggy bios
vfs.root.mountfrom="zfs:zroot/ROOT/fbsd11"


If you're facing the same problem, you might give this a shot? You seem to have plenty
of swap to cannibalize as well ;)
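
Very roughly, the steps were along these lines (a minimal sketch only; the disk names and
partition indices are illustrative and must match your own layout, while the gpt labels
are the ones from the zboot pool shown elsewhere in this thread):

    # retype one spare swap partition per disk and label it
    gpart modify -i 3 -t freebsd-zfs -l zfs-boot0 ada0
    gpart modify -i 3 -t freebsd-zfs -l zfs-boot1 ada1

    # small mirrored pool in the low, BIOS-readable part of the disks, holding only /boot
    zpool create -m /zboot zboot mirror gpt/zfs-boot0 gpt/zfs-boot1
    cp -RPp /boot /zboot/

    # the loader then reads /zboot/boot/loader.conf, and this line points the kernel
    # back at the real root pool
    echo 'vfs.root.mountfrom="zfs:zroot/ROOT/fbsd11"' >> /zboot/boot/loader.conf

In my case gptzfsboot ended up using zboot simply because it could not read zroot in the
first place.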

Cheers,
Markus






Re: ZFS i/o error in recent 12.0

2018-03-20 Thread Trond Endrestøl
On Tue, 20 Mar 2018 08:00+0900, KIRIYAMA Kazuhiko wrote:

> Hi,
> 
> I've run into the sudden death of a ZFS full-volume
> machine (r330434) about 10 days after installation [1]:
> 
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
> gptzfsboot: failed to mount default pool zroot
> 
> FreeBSD/x86 boot
> ZFS: i/o error - all block copies unavailable
> ZFS: can't find dataset u
> Default: zroot/<0x0>:
> boot: 
> 
> The partition layout is below:
> 
> # gpart show /dev/mfid0
> =>          40  31247564720  mfid0  GPT  (15T)
>             40       409600      1  efi  (200M)
>         409640         1024      2  freebsd-boot  (512K)
>         410664          984         - free -  (492K)
>         411648    268435456      3  freebsd-swap  (128G)
>      268847104  30978715648      4  freebsd-zfs  (14T)
>    31247562752         2008         - free -  (1.0M)
> 
> # 
> 
> But nothing like this had happened on an older -current ZFS full-volume
> machine (r327038M). According to [2] the reason is an inconsistent
> boot/zfs/zpool.cache. I've tried to cope with this by repairing
> /boot [3] from a rescue bootable USB as follows:
> 
> # kldload zfs
> # zpool import 
>    pool: zroot
>      id: 17762298124265859537
>   state: ONLINE
>  action: The pool can be imported using its name or numeric identifier.
>  config:
> 
>         zroot       ONLINE
>           mfid0p4   ONLINE
> # zpool import -fR /mnt zroot
> # df -h
> Filesystem                  Size    Used   Avail Capacity  Mounted on
> /dev/da0p2                   14G    1.6G     11G    13%    /
> devfs                       1.0K    1.0K      0B   100%    /dev
> zroot/.dake                  14T     18M     14T     0%    /mnt/.dake
> zroot/ds                     14T     96K     14T     0%    /mnt/ds
> zroot/ds/backup              14T     88K     14T     0%    /mnt/ds/backup
> zroot/ds/backup/kazu.pis     14T     31G     14T     0%    /mnt/ds/backup/kazu.pis
> zroot/ds/distfiles           14T    7.9M     14T     0%    /mnt/ds/distfiles
> zroot/ds/obj                 14T     10G     14T     0%    /mnt/ds/obj
> zroot/ds/packages            14T    4.0M     14T     0%    /mnt/ds/packages
> zroot/ds/ports               14T    1.3G     14T     0%    /mnt/ds/ports
> zroot/ds/src                 14T    2.6G     14T     0%    /mnt/ds/src
> zroot/tmp                    14T     88K     14T     0%    /mnt/tmp
> zroot/usr/home               14T    136K     14T     0%    /mnt/usr/home
> zroot/usr/local              14T     10M     14T     0%    /mnt/usr/local
> zroot/var/audit              14T     88K     14T     0%    /mnt/var/audit
> zroot/var/crash              14T     88K     14T     0%    /mnt/var/crash
> zroot/var/log                14T    388K     14T     0%    /mnt/var/log
> zroot/var/mail               14T     92K     14T     0%    /mnt/var/mail
> zroot/var/ports              14T     11M     14T     0%    /mnt/var/ports
> zroot/var/tmp                14T    6.0M     14T     0%    /mnt/var/tmp
> zroot/vm                     14T    2.8G     14T     0%    /mnt/vm
> zroot/vm/tbedfc              14T    1.6G     14T     0%    /mnt/vm/tbedfc
> zroot                        14T     88K     14T     0%    /mnt/zroot
> # zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> zroot                     51.1G  13.9T    88K  /mnt/zroot
> zroot/.dake               18.3M  13.9T  18.3M  /mnt/.dake
> zroot/ROOT                1.71G  13.9T    88K  none
> zroot/ROOT/default        1.71G  13.9T  1.71G  /mnt/mnt
> zroot/ds                  45.0G  13.9T    96K  /mnt/ds
> zroot/ds/backup           30.8G  13.9T    88K  /mnt/ds/backup
> zroot/ds/backup/kazu.pis  30.8G  13.9T  30.8G  /mnt/ds/backup/kazu.pis
> zroot/ds/distfiles        7.88M  13.9T  7.88M  /mnt/ds/distfiles
> zroot/ds/obj              10.4G  13.9T  10.4G  /mnt/ds/obj
> zroot/ds/packages         4.02M  13.9T  4.02M  /mnt/ds/packages
> zroot/ds/ports            1.26G  13.9T  1.26G  /mnt/ds/ports
> zroot/ds/src              2.56G  13.9T  2.56G  /mnt/ds/src
> zroot/tmp                   88K  13.9T    88K  /mnt/tmp
> zroot/usr                 10.4M  13.9T    88K  /mnt/usr
> zroot/usr/home             136K  13.9T   136K  /mnt/usr/home
> zroot/usr/local           10.2M  13.9T  10.2M  /mnt/usr/local
> zroot/var                 17.4M  13.9T    88K  /mnt/var
> zroot/var/audit             88K  13.9T    88K  /mnt/var/audit
> zroot/var/crash             88K  13.9T    88K  /mnt/var/crash
> zroot/var/log              388K  13.9T   388K  /mnt/var/log
> zroot/var/mail              92K  13.9T    92K  /mnt/var/mail
> zroot/var/ports           10.7M  13.9T  10.7M  /mnt/var/ports
> zroot/var/tmp             5.98M  13.9T  5.98M  /mnt/var/tmp
> zroot/vm                  4.33G  13.9T  2.75G  /mnt/vm
> zroot/vm/tbedfc           1.58G  13.9T  1.58G  /mnt/vm/tbedfc
> # zfs mount zroot/ROOT/default
> # cd /mnt/mnt/
> # mv boot boot.bak
> # cp -RPp boot.bak boot
> # gpart show /dev/mfid0
> => 40  31247564720  mfid0  GPT  (15T)
>  

Re: ZFS i/o error in recent 12.0

2018-03-19 Thread Allan Jude
On 2018-03-19 19:00, KIRIYAMA Kazuhiko wrote:
> Hi,
> 
> I've run into the sudden death of a ZFS full-volume
> machine (r330434) about 10 days after installation [1]:
> 
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
> gptzfsboot: failed to mount default pool zroot
> 
> FreeBSD/x86 boot
> ZFS: i/o error - all block copies unavailable
> ZFS: can't find dataset u
> Default: zroot/<0x0>:
> boot: 
> 
> The partition layout is below:
> 
> # gpart show /dev/mfid0
> =>          40  31247564720  mfid0  GPT  (15T)
>             40       409600      1  efi  (200M)
>         409640         1024      2  freebsd-boot  (512K)
>         410664          984         - free -  (492K)
>         411648    268435456      3  freebsd-swap  (128G)
>      268847104  30978715648      4  freebsd-zfs  (14T)
>    31247562752         2008         - free -  (1.0M)
> 
> # 
> 
> But nothing like this had happened on an older -current ZFS full-volume
> machine (r327038M). According to [2] the reason is an inconsistent
> boot/zfs/zpool.cache. I've tried to cope with this by repairing
> /boot [3] from a rescue bootable USB as follows:
> 
> # kldload zfs
> # zpool import 
>    pool: zroot
>      id: 17762298124265859537
>   state: ONLINE
>  action: The pool can be imported using its name or numeric identifier.
>  config:
> 
>         zroot       ONLINE
>           mfid0p4   ONLINE
> # zpool import -fR /mnt zroot
> # df -h
> Filesystem                  Size    Used   Avail Capacity  Mounted on
> /dev/da0p2                   14G    1.6G     11G    13%    /
> devfs                       1.0K    1.0K      0B   100%    /dev
> zroot/.dake                  14T     18M     14T     0%    /mnt/.dake
> zroot/ds                     14T     96K     14T     0%    /mnt/ds
> zroot/ds/backup              14T     88K     14T     0%    /mnt/ds/backup
> zroot/ds/backup/kazu.pis     14T     31G     14T     0%    /mnt/ds/backup/kazu.pis
> zroot/ds/distfiles           14T    7.9M     14T     0%    /mnt/ds/distfiles
> zroot/ds/obj                 14T     10G     14T     0%    /mnt/ds/obj
> zroot/ds/packages            14T    4.0M     14T     0%    /mnt/ds/packages
> zroot/ds/ports               14T    1.3G     14T     0%    /mnt/ds/ports
> zroot/ds/src                 14T    2.6G     14T     0%    /mnt/ds/src
> zroot/tmp                    14T     88K     14T     0%    /mnt/tmp
> zroot/usr/home               14T    136K     14T     0%    /mnt/usr/home
> zroot/usr/local              14T     10M     14T     0%    /mnt/usr/local
> zroot/var/audit              14T     88K     14T     0%    /mnt/var/audit
> zroot/var/crash              14T     88K     14T     0%    /mnt/var/crash
> zroot/var/log                14T    388K     14T     0%    /mnt/var/log
> zroot/var/mail               14T     92K     14T     0%    /mnt/var/mail
> zroot/var/ports              14T     11M     14T     0%    /mnt/var/ports
> zroot/var/tmp                14T    6.0M     14T     0%    /mnt/var/tmp
> zroot/vm                     14T    2.8G     14T     0%    /mnt/vm
> zroot/vm/tbedfc              14T    1.6G     14T     0%    /mnt/vm/tbedfc
> zroot                        14T     88K     14T     0%    /mnt/zroot
> # zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> zroot                     51.1G  13.9T    88K  /mnt/zroot
> zroot/.dake               18.3M  13.9T  18.3M  /mnt/.dake
> zroot/ROOT                1.71G  13.9T    88K  none
> zroot/ROOT/default        1.71G  13.9T  1.71G  /mnt/mnt
> zroot/ds                  45.0G  13.9T    96K  /mnt/ds
> zroot/ds/backup           30.8G  13.9T    88K  /mnt/ds/backup
> zroot/ds/backup/kazu.pis  30.8G  13.9T  30.8G  /mnt/ds/backup/kazu.pis
> zroot/ds/distfiles        7.88M  13.9T  7.88M  /mnt/ds/distfiles
> zroot/ds/obj              10.4G  13.9T  10.4G  /mnt/ds/obj
> zroot/ds/packages         4.02M  13.9T  4.02M  /mnt/ds/packages
> zroot/ds/ports            1.26G  13.9T  1.26G  /mnt/ds/ports
> zroot/ds/src              2.56G  13.9T  2.56G  /mnt/ds/src
> zroot/tmp                   88K  13.9T    88K  /mnt/tmp
> zroot/usr                 10.4M  13.9T    88K  /mnt/usr
> zroot/usr/home             136K  13.9T   136K  /mnt/usr/home
> zroot/usr/local           10.2M  13.9T  10.2M  /mnt/usr/local
> zroot/var                 17.4M  13.9T    88K  /mnt/var
> zroot/var/audit             88K  13.9T    88K  /mnt/var/audit
> zroot/var/crash             88K  13.9T    88K  /mnt/var/crash
> zroot/var/log              388K  13.9T   388K  /mnt/var/log
> zroot/var/mail              92K  13.9T    92K  /mnt/var/mail
> zroot/var/ports           10.7M  13.9T  10.7M  /mnt/var/ports
> zroot/var/tmp             5.98M  13.9T  5.98M  /mnt/var/tmp
> zroot/vm                  4.33G  13.9T  2.75G  /mnt/vm
> zroot/vm/tbedfc           1.58G  13.9T  1.58G  /mnt/vm/tbedfc
> # zfs mount zroot/ROOT/default
> # cd /mnt/mnt/
> # mv boot boot.bak
> # cp -RPp boot.bak boot
> # gpart show /dev/mfid0
> => 40  31247564720  mfid0  GPT  (15T)
>40   

ZFS i/o error in recent 12.0

2018-03-19 Thread KIRIYAMA Kazuhiko
Hi,

I've run into the sudden death of a ZFS full-volume
machine (r330434) about 10 days after installation [1]:

ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot
ZFS: i/o error - all block copies unavailable
ZFS: can't find dataset u
Default: zroot/<0x0>:
boot: 

The partition layout is below:

 # gpart show /dev/mfid0
=>          40  31247564720  mfid0  GPT  (15T)
            40       409600      1  efi  (200M)
        409640         1024      2  freebsd-boot  (512K)
        410664          984         - free -  (492K)
        411648    268435456      3  freebsd-swap  (128G)
     268847104  30978715648      4  freebsd-zfs  (14T)
   31247562752         2008         - free -  (1.0M)

# 

But nothing like this had happened on an older -current ZFS full-volume
machine (r327038M). According to [2] the reason is an inconsistent
boot/zfs/zpool.cache. I've tried to cope with this by repairing
/boot [3] from a rescue bootable USB as follows:

# kldload zfs
# zpool import
   pool: zroot
     id: 17762298124265859537
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zroot       ONLINE
          mfid0p4   ONLINE
# zpool import -fR /mnt zroot
# df -h
Filesystem                  Size    Used   Avail Capacity  Mounted on
/dev/da0p2                   14G    1.6G     11G    13%    /
devfs                       1.0K    1.0K      0B   100%    /dev
zroot/.dake                  14T     18M     14T     0%    /mnt/.dake
zroot/ds                     14T     96K     14T     0%    /mnt/ds
zroot/ds/backup              14T     88K     14T     0%    /mnt/ds/backup
zroot/ds/backup/kazu.pis     14T     31G     14T     0%    /mnt/ds/backup/kazu.pis
zroot/ds/distfiles           14T    7.9M     14T     0%    /mnt/ds/distfiles
zroot/ds/obj                 14T     10G     14T     0%    /mnt/ds/obj
zroot/ds/packages            14T    4.0M     14T     0%    /mnt/ds/packages
zroot/ds/ports               14T    1.3G     14T     0%    /mnt/ds/ports
zroot/ds/src                 14T    2.6G     14T     0%    /mnt/ds/src
zroot/tmp                    14T     88K     14T     0%    /mnt/tmp
zroot/usr/home               14T    136K     14T     0%    /mnt/usr/home
zroot/usr/local              14T     10M     14T     0%    /mnt/usr/local
zroot/var/audit              14T     88K     14T     0%    /mnt/var/audit
zroot/var/crash              14T     88K     14T     0%    /mnt/var/crash
zroot/var/log                14T    388K     14T     0%    /mnt/var/log
zroot/var/mail               14T     92K     14T     0%    /mnt/var/mail
zroot/var/ports              14T     11M     14T     0%    /mnt/var/ports
zroot/var/tmp                14T    6.0M     14T     0%    /mnt/var/tmp
zroot/vm                     14T    2.8G     14T     0%    /mnt/vm
zroot/vm/tbedfc              14T    1.6G     14T     0%    /mnt/vm/tbedfc
zroot                        14T     88K     14T     0%    /mnt/zroot
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
zroot                     51.1G  13.9T    88K  /mnt/zroot
zroot/.dake               18.3M  13.9T  18.3M  /mnt/.dake
zroot/ROOT                1.71G  13.9T    88K  none
zroot/ROOT/default        1.71G  13.9T  1.71G  /mnt/mnt
zroot/ds                  45.0G  13.9T    96K  /mnt/ds
zroot/ds/backup           30.8G  13.9T    88K  /mnt/ds/backup
zroot/ds/backup/kazu.pis  30.8G  13.9T  30.8G  /mnt/ds/backup/kazu.pis
zroot/ds/distfiles        7.88M  13.9T  7.88M  /mnt/ds/distfiles
zroot/ds/obj              10.4G  13.9T  10.4G  /mnt/ds/obj
zroot/ds/packages         4.02M  13.9T  4.02M  /mnt/ds/packages
zroot/ds/ports            1.26G  13.9T  1.26G  /mnt/ds/ports
zroot/ds/src              2.56G  13.9T  2.56G  /mnt/ds/src
zroot/tmp                   88K  13.9T    88K  /mnt/tmp
zroot/usr                 10.4M  13.9T    88K  /mnt/usr
zroot/usr/home             136K  13.9T   136K  /mnt/usr/home
zroot/usr/local           10.2M  13.9T  10.2M  /mnt/usr/local
zroot/var                 17.4M  13.9T    88K  /mnt/var
zroot/var/audit             88K  13.9T    88K  /mnt/var/audit
zroot/var/crash             88K  13.9T    88K  /mnt/var/crash
zroot/var/log              388K  13.9T   388K  /mnt/var/log
zroot/var/mail              92K  13.9T    92K  /mnt/var/mail
zroot/var/ports           10.7M  13.9T  10.7M  /mnt/var/ports
zroot/var/tmp             5.98M  13.9T  5.98M  /mnt/var/tmp
zroot/vm                  4.33G  13.9T  2.75G  /mnt/vm
zroot/vm/tbedfc           1.58G  13.9T  1.58G  /mnt/vm/tbedfc
# zfs mount zroot/ROOT/default
# cd /mnt/mnt/
# mv boot boot.bak
# cp -RPp boot.bak boot
# gpart show /dev/mfid0
=>          40  31247564720  mfid0  GPT  (15T)
            40       409600      1  efi  (200M)
        409640         1024      2  freebsd-boot  (512K)
        410664          984         - free -  (492K)
        411648    268435456      3  freebsd-swap  (128G)
     268847104  30978715648      4  freebsd-zfs  (14T)