As a result of a power spike during a thunderstorm I lost a SATA controller
card. This card supported my ZFS pool called newsan, which is a 4 x Samsung
1TB SATA-II disk RAID-Z. I replaced the card, and the devices have the same
controller/disk numbers, but now I have the following issue.

-bash-3.2$ zpool status
  pool: newsan
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newsan      FAULTED      1     0     0  corrupted data
          raidz1    ONLINE       6     0     0
            c10d1   ONLINE      17     0     0
            c10d0   ONLINE      17     0     0
            c9d1    ONLINE      24     0     0
            c9d0    ONLINE      24     0     0
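
Before I go any further, would checking whether the on-disk labels are still
readable tell us anything? This is just a sketch of what I had in mind (I
haven't run it yet), using zdb to read the labels off one of the pool disks:

# dump the four ZFS labels from one of the pool's disks
pfexec zdb -l /dev/dsk/c9d0s0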

Something majorly weird is going on, as when I run format I see this:
-bash-3.2$ pfexec format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c3d0 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
          /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],4/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       1. c3d1 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
          /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],4/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       2. c9d0 <SAMSUNG-S13PJ1BQ60312-0001-31.50MB>
          /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       3. c9d1 <SAMSUNG-S13PJ1BQ60311-0001-31.50MB>
          /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       4. c10d0 <SAMSUNG-S13PJ1BQ60311-0001-31.50MB>
          /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       5. c10d1 <SAMSUNG-S13PJ1BQ60312-0001-31.50MB>
          /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0

??? 31.50MB ??? They all used to show as 1TB, I believe (or 931GB, or whatever).

Specify disk (enter its number): 2
selecting c9d0
NO Alt slice
No defect list found
[disk formatted, no defect list found]
/dev/dsk/c9d0s0 is part of active ZFS pool newsan. Please see zpool(1M).
format> p
partition> p
Current partition table (original):
Total disk sectors available: 1953503710 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector          Size          Last Sector
  0        usr    wm               256       931.50GB           1953503710    
  1 unassigned    wm                 0            0                0    
  2 unassigned    wm                 0            0                0    
  3 unassigned    wm                 0            0                0    
  4 unassigned    wm                 0            0                0    
  5 unassigned    wm                 0            0                0    
  6 unassigned    wm                 0            0                0    
  8   reserved    wm        1953503711         8.00MB           1953520094 

So the partition table looks correct. I don't believe all four disks died
concurrently.
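
As a cross-check on that odd 31.50MB reading, I figure I could ask prtvtoc
what the label itself says (untested as yet; same device as in the format
session above):

# print the partition map straight from the disk label
pfexec prtvtoc /dev/rdsk/c9d0s0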

Any thoughts on how to recover? I don't particularly want to restore the
couple of terabytes of data if I don't have to.
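
One thing I'm considering, unless someone warns me off, is an export and
re-import, with the txg-rewind recovery option if my build's zpool supports
it. Roughly:

pfexec zpool export newsan
pfexec zpool import             # scan for pools available to import
pfexec zpool import -F newsan   # -F attempts recovery by discarding the last few transactions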

analyze> read
Ready to analyze (won't harm SunOS). This takes a long time, 
but is interruptable with CTRL-C. Continue? y
Current Defect List must be initialized to do automatic repair.

Oh, and what's this defect list thing? I haven't seen it before.

defect> print
No working list defined.
defect> create
Controller does not support creating manufacturer's defect list.
defect> extract
Ready to extract working list. This cannot be interrupted
and may take a long while. Continue? y
NO Alt slice
NO Alt slice
Extracting defect list...No defect list found
Extraction failed.
defect> commit
Ready to update Current Defect List, continue? y
Current Defect List updated, total of 0 defects.
Disk must be reformatted for changes to take effect.
analyze> read
Ready to analyze (won't harm SunOS). This takes a long time, 
but is interruptable with CTRL-C. Continue? y

        pass 0
   64386  

        pass 1
   64386  

Total of 0 defective blocks repaired.

So the read test seemed to work fine.

Any suggestions on how to proceed? Any thoughts on why the disks are showing
up so weirdly in format? Is there any way to recover or rebuild the zpool
metadata?
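
If it helps with diagnosis, I believe zdb can also examine a pool that isn't
imported; assuming the -e option applies here, something like this after an
export:

# dump the uberblocks of the exported pool (-e = pool not imported)
pfexec zdb -e -uuu newsan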

Any help would be appreciated.

Regards Rep