I have a system that took a RAID6 hardware array and created a ZFS pool on top
of it (the pool contains only one device, which is the entire RAID6 HW array).  A
few weeks ago, the Sun V440 somehow got completely wrapped around the axle and
the operating system had to be rebuilt.  Once the system was rebuilt, I did a
zpool import on the pool. (BTW, I didn't build the system....just an engineer
trying to help out....)

Doing a zpool status -v, I saw some files that were damaged.  The issue I am
seeing now is that when I delete the damaged files in question, they show up
like this in the output from zpool status -v:

pool1/data1:<0xba2c7>

So zpool status -v is still showing 1958 errors, but instead of showing the
paths to the files, I am seeing messages like the one above for the files
that I deleted.

Other than rebuilding the pool from scratch (which might happen), is there any
way to get rid of this error?  It doesn't look like any new errors are
occurring, just the original damaged files from when the system died.  I
thought about running a scrub, but I don't know if that will do much since it
isn't a mirror or raidz (there's no redundancy at the ZFS level to repair from).
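For reference, this is the sequence I was considering trying (pool name taken from the status output above; whether clear plus a scrub actually drops these stale entries on a non-redundant pool is exactly what I'm unsure about):

```shell
# Reset the pool's error counters and the persistent error log
zpool clear pool1

# Re-read all allocated blocks; with the damaged files already deleted,
# a completed scrub should no longer find those blocks
zpool scrub pool1

# Check progress and see whether the <0x...> entries are gone afterwards
zpool status -v pool1
```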

Any ideas would be great
Thanks!
Chris

PS: I know this array probably should have been built treating the disks as a
JBOD and letting ZFS do the RAID.  Unfortunately, nobody asked me when it was
built :)
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss