Joseph,
Thanks for the reply. I came back to work on AVS today. Here are
the latest results and some thoughts.
1) When changes are to be made on the secondary:
a. SNDR is placed into logging mode
b. the primary pool is exported using zpool export
c. the secondary pool is imported using zpool import
2) Data changes are made on the secondary. The data will be
synchronized back to the primary later.
3) After I export the ZFS pool on the secondary, SNDR is put into
reverse synchronization mode. I assume it takes some time to do the
update, whose progress I check constantly using dsstat.
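For concreteness, the sequence in your steps 1) through 3) corresponds
roughly to the commands below. The pool name "tank" and the use of the
all-sets form of sndradm are placeholders for whatever your actual
configuration uses:

    primary#   sndradm -n -l          # place the SNDR set(s) into logging mode
    primary#   zpool export tank
    secondary# zpool import tank
    ... changes are made on the secondary ...
    secondary# zpool export tank
    primary#   sndradm -n -u -r       # reverse update: copy changed blocks back
    primary#   dsstat -m sndr         # watch the update progress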
You do not need to wait until the update completes. SNDR supports
on-demand data access while the reverse update is in progress: if data
blocks changed while the volume was imported on the SNDR secondary,
SNDR knows about them because bitmaps were exchanged as part of
initiating the reverse update, so SNDR on the primary node will
request any changed blocks from the secondary on demand.
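In other words, something like this should be safe (again, "tank" is
a placeholder for your pool name):

    primary# sndradm -n -u -r     # start the reverse update
    primary# zpool import tank    # no need to wait for dsstat to go idle

Any block the import touches that has not yet been copied back is
fetched from the secondary on demand.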
4) I wait until dsstat reports no activity. Assuming zpool import
would see corrupted data if the synchronization were still in
progress, I manually place SNDR into logging mode before issuing
zpool import.
You do not need to wait.
What does "zpool import" report, when done without specifying the
zpool name?
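If you do want a barrier somewhere, sndradm's wait option blocks until
the copy or update completes, which is less fragile than polling
dsstat; and the no-argument form of zpool import is the one I am
asking about:

    primary# sndradm -n -w    # block until the synchronization finishes
    primary# zpool import     # no pool name: list pools available for import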
5) Unfortunately, corruption still occurs, which leads me to two
possible causes:
a. data corruption during the network transfer
b. data incompatibility in ZFS
What build or baselevel of Solaris are you using?
What is the exact text of the failure you are seeing?
Prior to invoking the SNDR reverse update, can the ZFS storage pool
be imported and exported on the SNDR primary?
Prior to invoking the SNDR reverse update, can the ZFS storage pool
be imported and exported on the SNDR secondary?
In both cases, does zpool status show no errors?
What is your zpool configuration on both nodes?
What is your SNDR configuration on both nodes?
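Roughly these commands, run on both nodes, would capture most of what
I am asking for above ("tank" again stands in for your pool name):

    # cat /etc/release        # Solaris release/build
    # uname -a
    # zpool status -v tank    # pool configuration and error counters
    # sndradm -P              # SNDR set configuration and current state
    # dsstat -m sndr          # per-set replication statistics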
Here are my questions:
1. Does AVS ensure data integrity during replication?
AVS uses TCP/IP, whose checksums are the only layer of data-integrity
protection. I have never seen data errors in the transport layer, and
when fault injection is done on the transport, TCP/IP detects these
errors and replication terminates in error, prior to writing bad data
to disk.
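If you want to verify this independently of ZFS, one simple check is:
after a full forward sync, place the set into logging mode and
checksum the raw volume on each node (the device paths below are
placeholders for your actual replicated volumes):

    primary#   digest -a sha256 /dev/rdsk/c1t1d0s0
    secondary# digest -a sha256 /dev/rdsk/c1t1d0s0

Matching digests mean the data arrived bit-for-bit intact, and any
corruption is being introduced somewhere other than the transport.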
2. Does AVS need to convert data between x86 and SPARC formats during
replication?
AVS is both file system and volume manager agnostic, which means it
never looks at, or touches, the actual data it is replicating. With
the exception of ZFS, which is endian-neutral, replicating volumes
between x86 and SPARC systems causes problems for other file systems
and applications. This is the same issue as having SAN or shared
storage and trying to use the same volume data on both x86 and SPARC
systems.
Regards,
Joseph
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss