Re: [zfs-discuss] Persistent errors?

2012-06-22 Thread Roy Sigurd Karlsbakk
> It seems as though every time I scrub my mirror I get a few megabytes
> of checksum errors on one disk (luckily corrected by the other). Is
> there some way of tracking down a problem which might be persistent?

Check iostat -en, or iostat -En <devname> for a single device. If the latter shows
media errors, the drive is dying and should be replaced.
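
For example (c7t0d0 is just a placeholder here; substitute the device name your
pool actually uses):

# iostat -en
# iostat -En c7t0d0

In the -En output, look at the Hard Errors and Media Error counters; anything
non-zero there is a bad sign.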

> I wonder if it's anything to do with these messages which are
> constantly appearing on the console:
>
> Jun 17 12:06:18 sunny scsi: [ID 107833 kern.warning] WARNING:
> /pci@0,0/pci1000,8000@16/sd@0,0 (sd2):
> Jun 17 12:06:18 sunny SYNCHRONIZE CACHE command failed (5)
>
> I've no idea what they are about (this is on Solaris 11 btw).

No idea, sorry…

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is
an elementary imperative for every pedagogue to avoid excessive use of idioms of
xenotypic etymology. In most cases, adequate and relevant synonyms exist in
Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] files with permanent errors

2012-06-22 Thread rondzierwa
I am running ZFS filesystem version 4 and storage pool version 15 on a FreeBSD 
8.2-RELEASE amd64 kernel. I have a single 12TB pool on a 3ware 9650 controller 
with eight Seagate ST2000DL003 drives in a RAID-5 configuration managed by the 
controller. 

I recently had a connector problem on a disk in the array while running a 
performance test that was writing a 1TB pattern file to the array. When the 
RAID controller started reporting errors I stopped the test and re-seated the 
connector on the drive. After running a verify on the RAID, I tried to read the 
partial pattern file, and ZFS produced copious checksum error messages on the 
system console. So I rm'ed the file and got even more checksum errors, 
interspersed with several "I/O error 86" messages. Since the rm, ls no longer 
shows the file, but I ran a scrub just to be sure the bogus file was gone, and 
got tons more checksum and I/O error 86 messages. At the end, zpool status 
shows: 

phoenix# zpool status -v zfsPool
  pool: zfsPool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 3h40m with 6353 errors on Fri Jun 22 08:36:36 2012
config:

        NAME       STATE     READ WRITE CKSUM
        zfsPool    ONLINE       0     0 6.20K
          da0      ONLINE       0     0 12.4K

errors: Permanent errors have been detected in the following files:

        zfsPool/raid:0x9e241


I have tried zpool clear / reboot / zpool scrub several times now, and each time 
I get a similar set of errors and results. 
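
For reference, the sequence I have been repeating is roughly:

phoenix# zpool clear zfsPool
phoenix# zpool scrub zfsPool
phoenix# zpool status -v zfsPool    (once the scrub completes)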

My question is: how do I get rid of this file? It is no longer linked to a 
directory entry, and nothing should have it open since I have rebooted several 
times. Yet ZFS still tells me there is a broken file that I should restore. It is 
most likely the pattern test file that I deleted, so I don't need it and I don't 
want to recover it. I would just like to get rid of it and get my filesystem 
clean again without resorting to starting over. 


thanks, 
ron. 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Persistent errors?

2012-06-22 Thread Brandon High
On Mon, Jun 18, 2012 at 3:55 PM, sol a...@yahoo.com wrote:
> It seems as though every time I scrub my mirror I get a few megabytes of
> checksum errors on one disk (luckily corrected by the other). Is there some
> way of tracking down a problem which might be persistent?

Check the output of 'fmdump -eV'; it should have some (rather
extensive) information.
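
For example:

# fmdump -e            (one-line summary per error event)
# fmdump -eV | less    (full detail for each event)

The checksum errors should show up there as ZFS ereports, including which
device they came from.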

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss