Sincerest apologies! I thought I was including enough output without spamming my entire PuTTY screen ;)
Well, something else strange happened. Rewinding to the beginning: after I had things working fine and had copied over 180GB (~112k files) of data, for fun (a few hours before all these errors started occurring, perhaps a catalyst?) I switched the cable ordering in the back to test whether things would work as advertised (this is just a test box before we put all our data on it, so the data isn't critical... yet). Upon boot-up I did a scrub, which returned a perfectly normal, healthy status.
Original cable layout: c3d0, c3d1, c4d0, c4d1
After the cable switcheroo: c3d0, c4d0, c4d1, c3d1
I was about to pop out the drive, but lest I pull the wrong one, I figured I would switch the order back to normal first. But check this out:
# zpool status -v
  pool: jade
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scrub: resilver completed after 0h0m with 0 errors on Thu Mar 27 14:04:54 2008
config:

        NAME        STATE     READ WRITE CKSUM
        jade        DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0
            c4d0    FAULTED      0     0     0  too many errors
            c4d1    ONLINE       0     0     0

errors: No known data errors
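Since "c4d0" showing up twice more likely means confused device labels than two physical disks, one way to tell the entries apart is by the vdev GUID stored in each disk's on-disk label. A minimal sketch, assuming the four device names from this pool and the standard `zdb -l` label dump (paths will differ on other systems):

```shell
# Read the vdev label from each disk and collect its GUID, so the two
# "c4d0" entries can be told apart by GUID rather than by device name.
# Device names are the ones from this pool; adjust paths for your box.
report=""
for disk in c3d0 c3d1 c4d0 c4d1; do
    dev="/dev/dsk/${disk}s0"
    if [ -e "$dev" ]; then
        # the guid line in the label identifies the vdev regardless of cabling
        guid=$(zdb -l "$dev" | grep 'guid' | head -1)
        report="$report
$disk: $guid"
    else
        report="$report
$disk: (no such device on this system)"
    fi
done
echo "$report"
```

If the same GUID shows up under two device names, that would confirm the labels are confused rather than a drive being bad, and 'zpool clear' (as the status output suggests) would be the usual next step.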
So I have *two* c4d0 drives?? Is that even possible? Something is confused. I think I'm going to destroy this array completely and start from scratch; I've apparently fudged too many things and lost track of what is going on. I can try to re-create the error scenario and document my progress if it will help somebody (perhaps I've found a bug?). I still have all the files on the Windows box that caused all this weirdness in the first place.
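For what it's worth, the "destroy and start from scratch" path is short. A sketch, assuming the same pool name and 4-disk raidz1 layout as above (this wipes all data in the pool, so only on a test box like this one):

```shell
# Destroy the confused pool and recreate it with the same layout.
# Pool and device names are the ones from this thread (assumptions).
destroy_cmd="zpool destroy jade"
create_cmd="zpool create jade raidz c3d0 c3d1 c4d0 c4d1"
echo "$destroy_cmd"
echo "$create_cmd"
# Only actually run the commands where the ZFS tools exist.
if command -v zpool >/dev/null 2>&1; then
    $destroy_cmd && $create_cmd && zpool status jade
fi
```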
By the way, if anyone wants to look at this in detail, I can provide PuTTY access :)
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss