[zfs-discuss] zpool I/O error

2010-03-19 Thread Grant Lowe
Hi all, I'm trying to delete a zpool and when I do, I get this error:

# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
#

The pools I have on this box look like this:

# zpool list
NAME          SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
oradata_fs1   532G   119K   532G   0%
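A hedged sketch of how one might gather more detail on that I/O error before retrying the destroy (standard Solaris/ZFS tools; the pool name oradata_fs1 is taken from the message above):

    # zpool status -v oradata_fs1    # show per-device state and any persistent errors
    # fmdump -eV | grep -i zfs       # check the FMA error log for underlying device faults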

Re: [zfs-discuss] zpool i/o error

2008-08-05 Thread Victor Pajor
I found out what my problem was. It's hardware related: my two disks were on a SCSI channel that didn't work properly. It wasn't a ZFS problem. Thank you to everybody who replied. My bad.

Re: [zfs-discuss] zpool i/o error

2008-07-05 Thread Victor Pajor
Booted from 2008.05 and the error was the same as before: corrupted data for the last two disks. zdb -l output was also the same as before: it could read the label from disk 1 but not from disks 2 and 3.
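For context, a minimal sketch of that kind of label check (the device paths c7t0d0s0 and c7t1d0s0 are assumed from the pool description earlier in the thread):

    # zdb -l /dev/rdsk/c7t0d0s0    # dumps labels 0-3 when they are readable
    # zdb -l /dev/rdsk/c7t1d0s0    # an unreadable label typically produces an unpack failure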

Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
Can you try just deleting the zpool.cache file and letting it rebuild on import? My guess is that a listing of your old devices was still in there when the system came back up with the new hardware, while the OS stayed the same.
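A sketch of the suggested steps, assuming the default cache file location /etc/zfs/zpool.cache and the pool name zfs used elsewhere in the thread:

    # rm /etc/zfs/zpool.cache    # discard the stale device listing
    # zpool import               # rescan attached devices and list importable pools
    # zpool import -f zfs        # then force-import the pool under its current device names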

Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Victor Pajor
# rm /etc/zfs/zpool.cache
# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using the '-f' flag.

Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
I'll have to do some thunkin' on this. We just need to get back one of the disks; both would be great, but even one more would do the trick. After all other avenues have been tried, one thing you can try is to boot the 2008.05 livecd without installing the OS. Import
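Presumably the import from the livecd would look something like the following sketch (pool name zfs as above; -R imports under an alternate root so the livecd's own zpool.cache is left untouched):

    # zpool import                   # scan the attached disks for importable pools
    # zpool import -f -R /mnt zfs    # force-import under /mnt without updating zpool.cache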

Re: [zfs-discuss] zpool i/o error

2008-06-30 Thread Victor Pajor
By the looks of things, I don't think I will get any answers. So the moral of the story is (if your data is valuable):
1 - Never trust your hardware or software unless it's fully redundant.
2 - ALWAYS have an external backup because, even in the best of times, SHIT HAPPENS.

Re: [zfs-discuss] zpool i/o error

2008-06-27 Thread Victor Pajor
Here is what I found out.

AVAILABLE DISK SELECTIONS:
       0. c5t0d0 DEFAULT cyl 4424 alt 2 hd 255 sec 63
          /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       1. c5t1d0 SEAGATE-ST336754LW-0005-34.18GB
          /[EMAIL

Re: [zfs-discuss] zpool i/o error

2008-06-25 Thread Victor Pajor
What I mean about the error is: when a system crashes, ZFS just loses its references and thinks the disks are not available, when in fact the same disks worked perfectly just before the motherboard crash. I'm not just asking: isn't ZFS supposed to cope with this kind of crash? There must be a

Re: [zfs-discuss] zpool i/o error

2008-06-25 Thread Richard Elling
Victor Pajor wrote: What I mean about the error is: when a system crashes, ZFS just loses its references and thinks the disks are not available, when in fact the same disks worked perfectly just before the motherboard crash. I'm not just asking: isn't ZFS supposed to cope with this kind of

Re: [zfs-discuss] zpool i/o error

2008-06-24 Thread Victor Pajor
# zpool export zfs
cannot open 'zfs': no such pool

Any command other than zpool import gives "cannot open 'zfs': no such pool". I can't seem to find any useful information on this type of error. Did anyone have this kind of problem?

Re: [zfs-discuss] zpool i/o error

2008-06-22 Thread Tomas Ögren
On 21 June, 2008 - Victor Pajor sent me these 0,9K bytes:

Another thing

config:
        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data

c7t0d0 c7t1d0

Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Richard Elling
Victor Pajor wrote:

System description:
  1 root UFS with Solaris 10U5 x86
  1 raidz pool with 3 disks (c6t1d0s0, c7t0d0s0, c7t1d0s0)

Description: Just before the death of my motherboard, I had installed OpenSolaris 2008.05 x86. Why, you ask? Because I needed to test that it was the

Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Victor Pajor
Thank you for your fast reply. You were right, there is something else wrong.

# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be

Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Victor Pajor
Another thing:

config:
        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data

c7t0d0 and c7t1d0 don't exist; it's normal, they are c2t0d0 and c2t1d0.

AVAILABLE DISK

[zfs-discuss] zpool i/o error

2008-06-20 Thread Victor Pajor
System description:
  1 root UFS with Solaris 10U5 x86
  1 raidz pool with 3 disks (c6t1d0s0, c7t0d0s0, c7t1d0s0)

Description: Just before the death of my motherboard, I had installed OpenSolaris 2008.05 x86. Why, you ask? Because I needed to test that it was the motherboard dying and not any