I hope this is the correct place for this question as I think it was ZFS
that saved me.
A little while ago I did something very silly in a moment of not thinking:
I meant to use dd to copy an image (around 500 MB) to a USB disk, but instead
wrote it over a non-RAIDed ZFS disk!
This of course stopped the zpool from operating. I fiddled around trying to
repair the problem and ended up having to reboot the server, as I was left
with many hung processes trying to read the disk.
I am using Solaris 11 on my x86 HP MicroServer N36L with SATA disks.
Below are various bits of status output. My question is: how, after a reboot,
did the disk / zpool recover with, as far as I can see, no corruption at all?
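For context (my own reading, not something from the thread): ZFS keeps four copies of its vdev label, two in the first 512 KB of the device and two in the last 512 KB, so a ~500 MB write at the start of the disk clobbers the front pair but leaves the back pair intact, which would explain the clean recovery. A crude file-backed sketch of the same idea (the file path and sizes are made up for the demo, scaled down from 500 MB):

```shell
# Demo on a hypothetical file-backed "disk" (NOT the poster's device):
# data at the tail of a device survives a large overwrite at the head,
# just as ZFS's two end-of-disk labels outlive a dd over the front.
set -e
disk=$(mktemp)                       # stand-in for /dev/rdsk/c2t0d0s0
dd if=/dev/zero of="$disk" bs=1M count=10 2>/dev/null       # 10 MB "disk"
# Plant a marker in the last 10 bytes, playing the role of the tail labels:
printf 'TAIL-LABEL' | dd of="$disk" bs=1 \
    seek=$((10*1024*1024 - 10)) conv=notrunc 2>/dev/null
# The accidental image write: clobber the first half of the device:
dd if=/dev/urandom of="$disk" bs=1M count=5 conv=notrunc 2>/dev/null
# The front is destroyed, but the tail marker is untouched:
tail -c 10 "$disk"                   # prints TAIL-LABEL
rm -f "$disk"
```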
root@n36l:/export/home/drowl# zpool status -x
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool
scan: none requested
NAME        STATE     READ WRITE CKSUM
data2       UNAVAIL      0     0     0  experienced I/O failures
  c2t0d0s0  UNAVAIL      0     0     0  experienced I/O failures
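For what it's worth, the sequence I would try from this state (a sketch, not something I actually ran; pool and device names are taken from the status output above) is to check whether any of the four vdev labels still decode, then clear the pool's error state so ZFS retries the device:

```shell
# Dump all four vdev labels; if labels 2 and 3 (stored at the end of the
# disk) still decode, the pool metadata survived the overwrite.
zdb -l /dev/rdsk/c2t0d0s0

# Clear the recorded I/O errors so ZFS reopens the device, then recheck:
zpool clear data2
zpool status -x
```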
I'm guessing from the output below that it's the label / partition table that is busted...
root@n36l:/export/home/drowl# prtvtoc /dev/rdsk/c2t0d0s2
prtvtoc: /dev/rdsk/c2t0d0s2: Unable to read Disk geometry errno = 0x5
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c2t0d0 <drive type unknown>
Specify disk (enter its number): 0
Error: can't open disk '/dev/rdsk/c2t0d0p0'.
AVAILABLE DRIVE TYPES:
0. Auto configure
19. ATA -Hitachi HDT7210-A3AA
Specify disk type (enter its number):
Any help most welcome,
<--Time flies like an arrow; fruit flies like a banana. -->
zfs-discuss mailing list