Hi Jim,

With or without waiting, zpool import on the primary fails as follows:

$ zpool import
  pool: avsg
    id: 11666376348763132304
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        avsg        UNAVAIL   insufficient replicas
          c2t1d0    UNAVAIL   corrupted data

The Solaris Express build is b60.

$ more /etc/release
                           Solaris Nevada snv_60 SPARC
           Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                             Assembled 12 March 2007

1. Prior to invoking the SNDR reverse update, can the ZFS storage pool be 
imported and exported on the SNDR primary?
Yes.

2. Prior to invoking the SNDR reverse update, can the ZFS storage pool be 
imported and exported on the SNDR secondary?
Yes.

3. In both cases, does zpool status show no errors?
Yes.
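
For reference, the full sequence I run around the reverse update looks roughly
like this (a sketch: the group name matches the SNDR configs under question 5,
and -n merely suppresses the confirmation prompts). On the primary (sf240):

$ zpool export avsg          # release the pool so SNDR can write to the volume
$ sndradm -n -g avsg -u -r   # reverse update: copy changed blocks secondary -> primary
$ sndradm -n -g avsg -w      # wait for the reverse sync to complete
$ sndradm -n -g avsg -l      # drop the set back into logging mode
$ zpool import avsg          # this is the step that reports the pool as FAULTED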

4. What is your zpool configuration on both nodes?
Primary:

$ zpool status avsg
  pool: avsg
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        avsg        ONLINE       0     0     0
          c2t1d0    ONLINE       0     0     0

errors: No known data errors

Secondary:

$ zpool status
  pool: avsg
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Apr 21 10:50:35 2007
config:

        NAME        STATE     READ WRITE CKSUM
        avsg        ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0

errors: No known data errors

5. What is your SNDR configuration on both nodes?

Primary:

$ sndradm -P
/dev/rdsk/c2t1d0s0      ->      avsx86:/dev/rdsk/c0t1d0s0
autosync: off, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: avsg, state: logging

Secondary:

$ sndradm -P
/dev/rdsk/c0t1d0s0      <-      sf240:/dev/rdsk/c2t1d0s0
autosync: off, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: avsg, state: logging
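
For completeness, the sets were enabled with commands along these lines (a
sketch: the s1 bitmap slices are placeholders, since sndradm -P does not show
the bitmap volumes). On the primary:

$ sndradm -e sf240 /dev/rdsk/c2t1d0s0 /dev/rdsk/c2t1d0s1 \
          avsx86 /dev/rdsk/c0t1d0s0 /dev/rdsk/c0t1d0s1 ip sync g avsg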

There is another strange behavior. During the initial synchronization of the 
ZFS pool from primary to secondary, I sometimes get the following error even 
though SNDR is in logging mode and the ZFS pool has been exported on the primary.

$ zpool import
  pool: avsg
    id: 8963265124130439293
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        avsg        UNAVAIL   missing device
          c0t1d0    ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

What could be the problem? Can I totally rule out a transport-layer problem? 
What is the cleanest way to erase a faulted ZFS pool that cannot be imported?
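
For the last question, the best I have come up with is to stamp fresh labels
over the device with a throwaway pool, assuming the disk contents are
expendable ("junk" is just a placeholder name), e.g. on the primary:

$ zpool create -f junk c2t1d0   # -f overwrites the old, faulted labels
$ zpool destroy junk            # remove the throwaway pool, leaving a clean disk

Is there a cleaner way?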


Thank you.