Sure enough, Cindy, the eSATA cables had been crossed. I exported, powered off,
reversed the cables, booted, imported, and the pool is currently resilvering
with both c5t0d0 & c5t1d0 present in the mirror. :) Thank you!!
Alex
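The recovery sequence described above can be sketched as a few commands; this is a hedged illustration only, assuming the pool is named tank as elsewhere in the thread (the cable swap itself happens while the box is powered off):

```shell
# Sketch of the recovery steps, assuming pool name "tank".
zpool export tank   # cleanly detach the pool before shutdown
poweroff            # power off, then physically reverse the eSATA cables
# ... after reboot ...
zpool import tank   # re-import; ZFS re-reads the on-disk labels per device
zpool status tank   # watch the resilver; both c5t0d0 and c5t1d0 should
                    # now appear under the mirror vdev
```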
On May 24, 2011, at 9:58 AM, Cindy Swearingen wrote:
Hi Alex,
If the hardware and cables were moved around then this is probably
the root cause of your problem. You should see if you can move the
devices/cabling back to what they were before the move.
The zpool history output provides the original device name, which
isn't c5t1d0, either:
# zpool history tank
Hi Cindy,
Thanks for the advice. This is just a little old Gateway PC provisioned as an
informal workgroup server. The main storage is two SATA drives in an external
enclosure, connected to a Sil3132 PCIe eSATA controller. The OS is snv_134b,
upgraded from snv_111a.
I can't identify a cause in
Hi Alex,
More scary than interesting to me.
What kind of hardware and which Solaris release?
Do you know what steps led up to this problem? Any recent hardware
changes?
This output should tell you which disks were in this pool originally:
# zpool history tank
If the history identifies tank's
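Checking the pool's command log as suggested above can be sketched like this; the sample output line is illustrative, not taken from this system:

```shell
# zpool history prints the pool's command log, so the original
# "zpool create" entry shows which devices the mirror was built on.
zpool history tank | grep create
# A typical entry looks like:
#   2009-06-01.12:00:00 zpool create tank mirror c5t0d0 c5t1d0
# (timestamp and devices above are hypothetical examples)
```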
Just a random thought: if two devices have the same IDs and seem to work
in turns, are you certain you have a mirror and not two paths to the same
backend? A few years back I was handed a box to support with "sporadically
failing drives", which turned out to be two paths to the same external array, a
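One way to rule out the two-paths-to-one-disk scenario is to compare the ZFS labels on both device nodes; this is a sketch using the device names from this thread, assuming slice s0 holds the labels:

```shell
# Each physical vdev carries a unique guid in its ZFS label. Two paths
# to the same backend report the SAME guid; a genuine two-disk mirror
# reports two different guids.
zdb -l /dev/rdsk/c5t0d0s0 | grep -w guid
zdb -l /dev/rdsk/c5t1d0s0 | grep -w guid
```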
I thought this was interesting - it looks like we have a failing drive in our
mirror, but the two device nodes in the mirror are the same:
pool: tank
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for th