I'm running a small ZFS pool on a cheap Intel motherboard. One of the SATA 
ports on the board has apparently gone bad, and the drive attached to it no 
longer connects:

so...@sbox:~$ uname -a 
SunOS sbox 5.11 snv_123 i86pc i386 i86pc Solaris

so...@sbox:~$ zpool status
        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c4t1d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c4t4d0  UNAVAIL      0     0     0  cannot open

No big deal: I've verified that the drive is still working, and I've moved it 
to a different SATA port so that it's visible again:

so...@sbox:~$ pfexec format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <DEFAULT cyl 9726 alt 2 hd 255 sec 63>
          /p...@0,0/pci8086,5...@1f,2/d...@0,0
       1. c0t1d0 <ATA-WDC WD7500AACS-0-1B01-698.64GB>
          /p...@0,0/pci8086,5...@1f,2/d...@1,0
       2. c0t2d0 <ATA-WDC WD7500AACS-0-1B01-698.64GB>
          /p...@0,0/pci8086,5...@1f,2/d...@2,0
       3. c0t3d0 <ATA-WDC WD7500AACS-0-1B01-698.64GB>
          /p...@0,0/pci8086,5...@1f,2/d...@3,0
       4. c0t5d0 <ATA-WDC WD7500AACS-0-1B01-698.64GB>
          /p...@0,0/pci8086,5...@1f,2/d...@5,0

In this case, the drive that was inaccessible at target 4 (c4t4d0 as the pool 
sees it) now shows up as c0t5d0.
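
In case it matters, I also plan to clean up any stale device links before 
touching the pool. As I understand it, devfsadm can prune dangling /dev 
entries and cfgadm will show the state of each SATA attachment point; this is 
just a sketch of what I intend to run, not something I've tried yet:

pfexec devfsadm -Cv    # remove dangling /dev/dsk links left by the dead port
cfgadm -al             # list attachment points; confirm the drive is connected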

Unfortunately I can't get this drive to reconnect. I've tried replacing it 
with itself, but zpool tells me it can't, because the disk is already part of 
an active ZFS pool:

so...@sbox:~$ pfexec zpool replace -f tank c0t5d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t5d0s0 is part of active ZFS pool tank. Please see zpool(1M).

so...@sbox:~$ pfexec zpool replace -f tank c4t4d0 c0t5d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t5d0s0 is part of active ZFS pool tank. Please see zpool(1M).
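
My reading of the error is that the disk still carries its old ZFS label, so 
'replace' refuses to reuse it. The next thing I'm considering (untested, so 
treat it as a sketch) is exporting and re-importing the pool so ZFS rescans 
the device paths, and failing that, onlining the vdev under the name the pool 
knows it by:

pfexec zpool export tank    # release the pool so the labels can be rescanned
pfexec zpool import tank    # on import, ZFS should find the disk at its new path

# If the disk is still listed as UNAVAIL after the import:
pfexec zpool online tank c4t4d0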

If that's the wrong approach, I'd love to know what I should be doing instead 
to bring my pool back online.

While I'm at it, I'm also curious why 'format' says my drives are at c0 while 
ZFS thinks they're all at c4.
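
My guess is that c0 and c4 are just different driver instance numbers pointing 
at the same controller, and that the /dev/dsk names are symlinks into the same 
/devices tree either way. If anyone can confirm, this is how I'd check 
(hypothetical; I haven't compared the output yet):

ls -l /dev/dsk/c0t1d0s0           # follow the symlink to the /devices path
grep pci8086 /etc/path_to_inst    # see which instance numbers the kernel assigned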