Hi Cindy,
Not sure exactly when the drives went into this state, but it is likely that it
happened when I added a second pool, added the same spares to the second pool,
then later destroyed the second pool. There have been no controller or any
other hardware changes to this system - it is all
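The sequence Ryan describes might look something like this (pool and device
names here are illustrative, not taken from his system):

```shell
# Illustrative only -- pool and device names are made up.
zpool create pool2 mirror c0t4d0 c0t5d0   # create a second pool
zpool add pool2 spare c5t3d0 c5t7d0       # add the spares already assigned to idgsun02
zpool destroy pool2                       # destroying it can leave the shared spares
                                          # listed twice (AVAIL and FAULTED) in idgsun02
```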
Hi Ryan,
Which Solaris release is this?
Thanks,
Cindy
On 07/09/10 10:38, Ryan Schwartz wrote:
Ok, so after removing the spares marked as AVAIL and re-adding them, I put
myself back in the "you're effed, dude" boat. What I should have done at that
point was a zpool export/import, which would have resolved it.
So what I did was recreate the steps that got me into the state
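The export/import fix mentioned above would be, roughly (behavior as described
in this thread, not guaranteed on every release):

```shell
zpool export idgsun02   # unload the pool configuration
zpool import idgsun02   # re-read it; the spare list is rebuilt from the rescan
zpool status idgsun02   # each spare should now appear once, as AVAIL
```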
Cindy,
[IDGSUN02:/] root# cat /etc/release
Solaris 10 10/08 s10x_u6wos_07b X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008
But as
I was going to suggest the export/import step next. :-)
I'm glad you were able to resolve it.
We are working on making spare behavior more robust.
In the meantime, my advice is keep life simple and do not share spares,
logs, caches, or even disks between pools.
Thanks,
Cindy
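Cindy's advice, sketched out with each pool given its own dedicated spare
(pool and device names hypothetical):

```shell
# Dedicated spares -- each disk belongs to exactly one pool.
zpool add tank1 spare c5t3d0
zpool add tank2 spare c5t7d0
```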
On 07/09/10
I've got an x4500 with a zpool in a weird state. The two spares are listed
twice each, once as AVAIL, and once as FAULTED.
[IDGSUN02:/opt/src] root# zpool status
pool: idgsun02
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
idgsun02    ONLINE
Hi Ryan,
What events led up to this situation? I've seen a similar problem when
a system upgrade caused the controller numbers of the spares to change.
In that case, the workaround was to export the pool, correct the spare
device names, and import the pool. I'm not sure if this workaround
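That workaround might look like the following (pool and device names are
hypothetical; the exact correction depends on how the controller numbers
changed):

```shell
zpool export tank           # release the pool
zpool import tank           # re-scan devices under their new controller numbers
# If a spare is still listed under its old name, replace the entry:
zpool remove tank c4t3d0    # stale spare name (hypothetical)
zpool add tank spare c5t3d0 # same disk under its new name (hypothetical)
```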