Re: [zfs-discuss] Strange mount -a problem in Solaris 11.1
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian Collins

> I have a recently upgraded (to Solaris 11.1) test system that fails to
> mount its filesystems on boot. Running zfs mount -a results in the odd
> error:
>
>   # zfs mount -a
>   internal error Invalid argument
>
> truss shows the last call as:
>
>   ioctl(3, ZFS_IOC_OBJECT_STATS, 0xF706BBB0)
>
> The system boots up fine in the original BE. The root (only) pool is on
> a single drive. Any ideas?

devfsadm -Cv
rm /etc/zfs/zpool.cache
init 6

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Scrub and checksum permutations
On Thu, Oct 25, 2012 at 2:25 AM, Jim Klimov <jimkli...@cos.ru> wrote:
> Hello all, I was describing how raidzN works recently, and got myself
> wondering: does zpool scrub verify all the parity sectors and the
> mirror halves?

Yes. The ZIO_FLAG_SCRUB flag instructs the raidz or mirror vdev to read and verify all parts of the block (parity sectors and mirror copies).

The math for RAID-Z is described in detail in the comments of vdev_raidz.c. If there is a checksum error, we reconstitute the data by trying all possible combinations of N incorrect sectors (N being the number of parity disks) -- see vdev_raidz_combrec().

--matt
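The combinatorial search Matt describes can be illustrated with a toy single-parity model. This is not the vdev_raidz_combrec() code (real RAID-Z uses Galois-field math for double and triple parity, and operates on raw sectors); it is a minimal sketch assuming plain XOR parity and a SHA-256 block checksum standing in for the ZFS checksum:

```python
# Toy sketch of combinatorial reconstruction (single-parity analogue).
# NOT vdev_raidz_combrec(): parity here is plain XOR and the "block
# checksum" is a SHA-256 hash, purely for illustration.
import hashlib
from itertools import combinations

def checksum(sectors):
    h = hashlib.sha256()
    for s in sectors:
        h.update(s)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Build a "block": 4 data sectors plus one XOR parity sector.
data = [bytes([i] * 8) for i in range(4)]
good_sum = checksum(data)
parity = data[0]
for s in data[1:]:
    parity = xor(parity, s)

# Silently corrupt one data sector (we don't know which).
damaged = list(data)
damaged[2] = b'\xff' * 8

def combrec(sectors, parity, expected_sum, nparity=1):
    """Try every combination of up to nparity 'bad' sectors: rebuild
    each suspect from parity plus the others, then re-verify the
    block checksum to see if that guess was right."""
    for combo in combinations(range(len(sectors)), nparity):
        trial = list(sectors)
        for bad in combo:
            rebuilt = parity
            for j, s in enumerate(trial):
                if j != bad:
                    rebuilt = xor(rebuilt, s)
            trial[bad] = rebuilt
        if checksum(trial) == expected_sum:
            return trial, combo
    return None, None

repaired, bad = combrec(damaged, parity, good_sum)
print(bad)              # -> (2,): sector 2 identified as the bad one
print(repaired == data) # -> True: data recovered
```

The key point carries over to the real code: the checksum, not the parity, is the arbiter, so reconstruction can try each candidate set of bad sectors and keep the combination that checksums correctly.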
Re: [zfs-discuss] Scrub and checksum permutations
On Wed, Oct 31, 2012 at 6:47 PM, Matthew Ahrens <mahr...@delphix.com> wrote:
> On Thu, Oct 25, 2012 at 2:25 AM, Jim Klimov <jimkli...@cos.ru> wrote:
>> Hello all, I was describing how raidzN works recently, and got myself
>> wondering: does zpool scrub verify all the parity sectors and the
>> mirror halves?
>
> Yes. The ZIO_FLAG_SCRUB instructs the raidz or mirror vdev to read and
> verify all parts of the blocks (parity sectors and mirror copies).

Good to know.

> The math for RAID-Z is described in detail in the comments of
> vdev_raidz.c. If there is a checksum error, we reconstitute the data
> by trying all possible combinations of N incorrect sectors (N being
> the number of parity disks) -- see vdev_raidz_combrec().

Google gave me this result:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c

It had me slightly concerned because it does the LFSR on single bytes, though for mainly theoretical reasons: for a raidz3 of 258 devices (ill-advised, to say the least), using single bytes in the LFSR wouldn't allow the cycle to be long enough for the parity to protect against two specific failures. However, I tested whether zpool create checks for this by creating 300 100MB files and attempting to make them into a raidz3 pool, and got this:

$ zpool create -n -o cachefile=none testlargeraidz raidz3 `pwd`/*
invalid vdev specification: raidz3 supports no more than 255 devices

Same story for raidz and raidz2. So, it looks like they already thought of this too.

Tim
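The 255-device cap Tim ran into falls straight out of the byte-wide field arithmetic. A short sketch makes the cycle length concrete: RAID-Z's parity multiplies by powers of 2 in GF(2^8), and those powers repeat with period 255, so there are only 255 distinct per-device coefficients. The reduction polynomial 0x11d below matches my reading of vdev_raidz.c, but treat that as an assumption; any primitive polynomial gives the same cycle length.

```python
# Sketch: why byte-wide GF(2^8) math caps RAID-Z width at 255 devices.
# Assumes reduction polynomial 0x11d (x^8 + x^4 + x^3 + x^2 + 1),
# believed to be the one used in vdev_raidz.c.

def gf_mul2(x):
    """Multiply by 2 in GF(2^8), reducing by 0x11d on overflow."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xff

# Walk the powers of 2 until the cycle closes back at 1.
seen = []
v = 1
while True:
    seen.append(v)
    v = gf_mul2(v)
    if v == 1:
        break

print(len(seen))  # -> 255: the powers of 2 repeat after 255 steps
```

With only 255 distinct coefficients, a 256th device would reuse one and two specific failures would become indistinguishable, which is exactly the situation the zpool create limit rules out.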
Re: [zfs-discuss] Strange mount -a problem in Solaris 11.1
On 10/31/12 23:35, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian Collins
>>
>> I have a recently upgraded (to Solaris 11.1) test system that fails to
>> mount its filesystems on boot. Running zfs mount -a results in the odd
>> error:
>>
>>   # zfs mount -a
>>   internal error Invalid argument
>>
>> truss shows the last call as:
>>
>>   ioctl(3, ZFS_IOC_OBJECT_STATS, 0xF706BBB0)
>>
>> The system boots up fine in the original BE. The root (only) pool is on
>> a single drive. Any ideas?
>
> devfsadm -Cv
> rm /etc/zfs/zpool.cache
> init 6

That was a big enough stick to fix it. Nasty bug nonetheless.

-- 
Ian.