Joseph,
The following are the steps I have used to configure ZFS on AVS.
Primary is the server with the primary replication disk. Secondary
is the server with the secondary replication disk.
Reverse synchronization from the Secondary back to the Primary (after
some modifications on the Secondary) always resulted in data
corruption on the Primary (see the #failure point marker below).
The replication disks (/dev/rdsk/c0t9d0s0 and /dev/rdsk/c0t1d0s0)
are EFI labeled, while the bitmap volumes (/dev/rdsk/c0t10d0s3 and
/dev/rdsk/c0t0d0s3) are standard Solaris disk partitions (slices).
/dev/rdsk/c0t9d0s0 is 36GB on a host running Solaris Express b60 SPARC;
/dev/rdsk/c0t1d0s0 is 146GB on a host running Solaris Express b60 x86.
The bitmap volumes are 100MB in size and reside on a different disk.
Are you sure that the disk /dev/rdsk/c0t9d0s0 on SPARC is EFI
labeled, not Solaris VTOC labeled? The reason I ask is that the first
16 blocks of a VTOC-labeled disk are skipped over by ZFS on SPARC,
while of course there is no VTOC on an EFI-labeled disk, so the
first 16 blocks are not skipped there.
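One way to double-check which label a disk actually carries (on Solaris,
prtvtoc(1M) or format(1M) -e is the authoritative tool) is to look for the
GPT signature "EFI PART" at LBA 1, as defined by the UEFI/GPT spec. A
sketch, where label_type is a hypothetical helper and the device path is
just the one from this thread:

```shell
# Sketch: print "EFI" if the device (or image file) has a GPT header at
# LBA 1, otherwise assume a VTOC or other label. This only checks the
# "EFI PART" signature; prtvtoc/format give the real answer on Solaris.
label_type() {
  sig=$(dd if="$1" bs=512 skip=1 count=1 2>/dev/null | head -c 8)
  if [ "$sig" = "EFI PART" ]; then echo EFI; else echo "VTOC/other"; fi
}

label_type /dev/rdsk/c0t9d0s0   # example device path from this thread
```

The same helper works against a plain image file, which makes it easy to
sanity-check off-host.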
Is there a reason that the SNDR primary slice is 36GB, whereas the
SNDR secondary is 146GB? What size bitmap do you use on both ends of
the SNDR replica? Hopefully it is sized for the SNDR primary volume.
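For reference, the commonly cited AVS rule of thumb for a Remote Mirror
(SNDR) bitmap is roughly 24 KB of overhead plus 1 KB per GB of data
volume; dsbitmap(1M) on the host reports the exact requirement. A quick
sketch under that assumption, with sndr_bitmap_kb as a hypothetical
helper:

```shell
# Sketch, assuming the rule of thumb of ~24 KB overhead plus 1 KB per GB
# of data volume for an SNDR bitmap. dsbitmap(1M) gives the authoritative
# number for a given volume.
sndr_bitmap_kb() {  # $1 = data volume size in GB
  echo $((24 + $1))
}

sndr_bitmap_kb 36    # primary:   60 (KB)
sndr_bitmap_kb 146   # secondary: 170 (KB)
```

Either way, the 100MB bitmap volumes described in this thread are far
larger than required for 36GB and 146GB data volumes.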
Jim
Did I miss any step? Can anybody help?
----------------------
(Primary,Secondary)
sndradm -nE e250 /dev/rdsk/c0t9d0s0 /dev/rdsk/c0t10d0s3 avsx86 /dev/rdsk/c0t1d0s0 /dev/rdsk/c0t0d0s3 ip sync g avsgroup
(Primary)
zpool create avsgroup c0t9d0
sndradm -g avsgroup -nu
zfs create avsgroup/dataA
find /usr/include/ | cpio -pdmu /avsgroup/dataA
zpool export avsgroup
sndradm -g avsgroup -nl
(Secondary)
zpool import avsgroup
zpool scrub avsgroup
zpool status # no errors
zfs create avsgroup/dataC
zfs set compression=on avsgroup/dataC
find /usr/lib | cpio -pdmu /avsgroup/dataC
zfs snapshot -r avsgroup/[EMAIL PROTECTED]
zfs destroy avsgroup/dataA
zpool export avsgroup
(Primary)
sndradm -g avsgroup -nu -r
zpool import avsgroup #failure point
$ zpool import
  pool: avsgroup
    id: 17664002528387614747
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        avsgroup    UNAVAIL  insufficient replicas
          c0t9d0    UNAVAIL  corrupted data
----------------------
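If the VTOC-vs-EFI label mismatch Jim asks about is the culprit, the
failure mode can be pictured with plain files: a block-for-block replica
of a slice whose ZFS data starts 16 blocks (8 KB) in looks like garbage to
a consumer that expects the data at offset 0. A simplified, file-backed
sketch (scratch files stand in for the devices; "ZFSDATA" is just a
marker, not real ZFS metadata):

```shell
# Simplified illustration of the offset mismatch: on a SPARC VTOC slice
# ZFS skips the first 16 blocks (8 KB), so a raw block copy leaves the
# data 8 KB away from where an EFI-labeled consumer expects it.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/primary" bs=1k count=64 2>/dev/null
# Writer skips 16 blocks, as ZFS would on a SPARC VTOC slice:
printf 'ZFSDATA' | dd of="$dir/primary" bs=512 seek=16 conv=notrunc 2>/dev/null
cp "$dir/primary" "$dir/secondary"   # stand-in for the SNDR block copy
at0=$(head -c 7 "$dir/secondary" | tr -d '\0')   # EFI-style read at offset 0
at16=$(dd if="$dir/secondary" bs=512 skip=16 count=1 2>/dev/null | head -c 7)
echo "offset 0: '$at0'  offset 8KB: '$at16'"     # offset 0: ''  offset 8KB: 'ZFSDATA'
rm -rf "$dir"
```

The reader looking at offset 0 sees only zeros, which matches the
"corrupted data" verdict zpool import gives when its labels are not where
it expects them.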
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss