Re: [zfs-discuss] zpool i/o error

2008-08-05 Thread Victor Pajor
I found out what my problem was.
It was hardware related: my two disks were on a SCSI channel that wasn't working properly.
It wasn't a ZFS problem.
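
For the record, a minimal sketch of the kind of checks that can confirm a flaky channel (I'm not claiming these are exactly the commands I ran; they are just the standard places to look, and the output is hardware-specific):

#!/bin/bash
# Quick hardware sanity checks, run as root.
iostat -En     # per-device soft/hard/transport error counters
fmdump -eV     # FMA error-report log: timeouts, resets, retried commands
cfgadm -al     # are the controller and all of its targets still attached?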
Thank you everybody who replied.

My Bad.
 
 


Re: [zfs-discuss] zpool i/o error

2008-07-05 Thread Victor Pajor
Booted from 2008.05, and the error was the same as before: corrupted data for the last two disks.

zdb -l was also the same as before: it could read the label from disk 1 but not from disks 2 and 3.
 
 


Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Victor Pajor
# rm /etc/zfs/zpool.cache
# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c5t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data
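
For anyone digging through the archives later: before giving up, these are the import variants I would still try from here. This is only a sketch; the numeric id is the one printed above, and none of it is guaranteed to get past unreadable labels.

# zpool import -d /dev/dsk
# zpool import -d /dev/dsk 3801622416844369872
# zpool import -f zfs

zpool import matches vdevs by the GUIDs stored in their labels rather than by the recorded paths, so an explicit -d rescan, or addressing the pool by its numeric id, can get past a stale device name. Pointing -d at a directory containing links to only the devices you want considered is another variant. None of it helps if the labels themselves cannot be read, which is what zdb showed on two of the disks.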
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-30 Thread Victor Pajor
By the looks of things, I don't think I will get any answers.

So the moral of the story is (if your data is valuable):
1 - Never trust your hardware or software unless it's fully redundant.
2 - ALWAYS have an external backup,

because, even in the best of times, SHIT HAPPENS.
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-27 Thread Victor Pajor
Here is what I found out.

AVAILABLE DISK SELECTIONS:
   0. c5t0d0 DEFAULT cyl 4424 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   1. c5t1d0 SEAGATE-ST336754LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   2. c6t0d0 SEAGATE-ST336753LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
   3. c6t1d0 SEAGATE-ST336753LW-HPS2-33.92GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
   4. c7t0d0 DEFAULT cyl 8921 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci9005,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   5. c7t1d0 DEFAULT cyl 8921 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci9005,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0

I created a little script:

#!/bin/bash
# Print the ZFS label(s), if any, on every device node under /dev/rdsk
for i in $( ls /dev/rdsk ); do
   echo "$i"
   zdb -l "/dev/rdsk/$i"
done

Here is some output of this command:

(1) #zdb -l /dev/rdsk/c5t1d0s0


LABEL 0

version=4
name='zfs'
state=0
txg=7855332
pool_guid=3801622416844369872
hostid=345240675
hostname='sun'
top_guid=4004063599069763239
guid=4086156223654637831
vdev_tree
type='raidz'
id=0
guid=4004063599069763239
nparity=1
metaslab_array=13
metaslab_shift=30
ashift=9
asize=109220462592
is_log=0
children[0]
type='disk'
id=0
guid=4086156223654637831
path='/dev/dsk/c6t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=69
children[1]
type='disk'
id=1
guid=13320021127057678234
path='/dev/dsk/c7t0d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=68
children[2]
type='disk'
id=2
guid=5212524563381
path='/dev/dsk/c7t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=22
...
LABEL 1, LABEL 2, and LABEL 3 are omitted for clarity


(2) #zdb -l /dev/rdsk/c6t1d0s0
(3) #zdb -l /dev/rdsk/c7t0d0s0
(4) #zdb -l /dev/rdsk/c7t1d0s0

all three commands give this:

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3



(5) #zdb -l /dev/rdsk/c7t0d0p0


LABEL 0

version=4
name='data'
state=0
txg=2333244
pool_guid=18349152765965118757
hostid=409943152
hostname='opensolaris'
top_guid=4131806235391152254
guid=13715042150527401204
vdev_tree
type='mirror'
id=0
guid=4131806235391152254
metaslab_array=14
metaslab_shift=29
ashift=9
asize=73402941440
is_log=0
children[0]
type='disk'
id=0
guid=4088711380714589637
path='/dev/dsk/c7t1d0p0'
devid='id1,[EMAIL PROTECTED]/q'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci9005,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:q'
whole_disk=0
children[1]
type='disk'
id=1
guid=13715042150527401204
path='/dev/dsk/c7t0d0p0'
devid='id1,[EMAIL PROTECTED]/q'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci9005,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:q'
whole_disk=0

LABEL 1, LABEL 2, and LABEL 3 are omitted for clarity

Now here's my question: when executing command (1), isn't children[0]'s path supposed to be /dev/dsk/c5t1d0s0, and not /dev/dsk/c6t1d0s0?

children[1] and children[2] are also not in sync: they refer to disks that are used by another pool.
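
One way to settle it, I think, is to ignore the recorded paths entirely and compare GUIDs, since that is what ZFS itself keys on. A rough sketch (the device names are just the ones from above):

#!/bin/bash
# Pull the name/guid/path fields out of each label so they can be
# matched against the children[] entries shown above.
for d in /dev/rdsk/c5t1d0s0 /dev/rdsk/c7t0d0p0 /dev/rdsk/c7t1d0p0; do
   echo "== $d"
   zdb -l "$d" | egrep "name=|guid=|path="
done

If the guid values line up with the children[] entries of the surviving label, the raidz members are simply sitting under new controller/target names; if a device reports a different pool_guid (as the 'data' label on c7t0d0p0 does), it belongs to another pool entirely.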

Re: [zfs-discuss] zpool i/o error

2008-06-25 Thread Victor Pajor
What I mean about the error is this:

When a system crashes, ZFS just loses its references and thinks that the disks are not available, when in fact the same disks worked perfectly right up until the motherboard died.

I'm not just asking rhetorically: isn't ZFS supposed to cope with this kind of crash?

There must be a way of diagnosing what is going on.

Here is what the label looks like after I changed the disk configuration. All I did was add another SCSI controller with 2 disks on it.


bash-3.00# zdb -l /dev/rdsk/c4t1d0s0

LABEL 0

   version=4
   name='zfs'
   state=0
   txg=7855332
   pool_guid=3801622416844369872
   hostid=345240675
   hostname='sun'
   top_guid=4004063599069763239
   guid=4086156223654637831
   vdev_tree
   type='raidz'
   id=0
   guid=4004063599069763239
   nparity=1
   metaslab_array=13
   metaslab_shift=30
   ashift=9
   asize=109220462592
   is_log=0
   children[0]
   type='disk'
   id=0
   guid=4086156223654637831
   path='/dev/dsk/c6t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=69
   children[1]
   type='disk'
   id=1
   guid=13320021127057678234
   path='/dev/dsk/c7t0d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=68
   children[2]
   type='disk'
   id=2
   guid=5212524563381
   path='/dev/dsk/c7t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=22

LABEL 1

   version=4
   name='zfs'
   state=0
   txg=7855332
   pool_guid=3801622416844369872
   hostid=345240675
   hostname='sun'
   top_guid=4004063599069763239
   guid=4086156223654637831
   vdev_tree
   type='raidz'
   id=0
   guid=4004063599069763239
   nparity=1
   metaslab_array=13
   metaslab_shift=30
   ashift=9
   asize=109220462592
   is_log=0
   children[0]
   type='disk'
   id=0
   guid=4086156223654637831
   path='/dev/dsk/c6t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=69
   children[1]
   type='disk'
   id=1
   guid=13320021127057678234
   path='/dev/dsk/c7t0d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=68
   children[2]
   type='disk'
   id=2
   guid=5212524563381
   path='/dev/dsk/c7t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=22

LABEL 2

   version=4
   name='zfs'
   state=0
   txg=7855332
   pool_guid=3801622416844369872
   hostid=345240675
   hostname='sun'
   top_guid=4004063599069763239
   guid=4086156223654637831
   vdev_tree
   type='raidz'
   id=0
   guid=4004063599069763239
   nparity=1
   metaslab_array=13
   metaslab_shift=30
   ashift=9
   asize=109220462592
   is_log=0
   children[0]
   type='disk'
   id=0
   guid=4086156223654637831
   path='/dev/dsk/c6t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=69
   children[1]
   type='disk'
   id=1
   guid=13320021127057678234
   path='/dev/dsk/c7t0d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=68
   children[2]
   type='disk'
   id=2
   

Re: [zfs-discuss] zpool i/o error

2008-06-24 Thread Victor Pajor
# zpool export zfs
cannot open 'zfs': no such pool

Any command other than zpool import gives cannot open 'zfs': no such pool.


I can't seem to find any useful information on this type of error.

Has anyone had this kind of problem?
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Victor Pajor
Thank you for your fast reply.
You were right: there is something else wrong.

# zpool import
  pool: zfs
id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

zfs FAULTED   corrupted data
  raidz1ONLINE
c1t1d0  ONLINE
c7t0d0  UNAVAIL   corrupted data
c7t1d0  UNAVAIL   corrupted data


NOW!!! How can that be? It was running fine before I changed the motherboard.
Right before I changed it the system crashed; isn't ZFS supposed to handle this?
How can I get the data back, or diagnose the problem further?
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Victor Pajor
Another thing

config:

zfs FAULTED   corrupted data
  raidz1ONLINE
c1t1d0  ONLINE
c7t0d0  UNAVAIL   corrupted data
c7t1d0  UNAVAIL   corrupted data

c7t0d0 and c7t1d0 don't exist; that's normal. They are c2t0d0 and c2t1d0.

AVAILABLE DISK SELECTIONS:
   0. c1t0d0 DEFAULT cyl 4424 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 SEAGATE-ST336754LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   2. c2t0d0 SEAGATE-ST336753LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
   3. c2t1d0 SEAGATE-ST336753LW-HPS2-33.92GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
 
 


[zfs-discuss] zpool i/o error

2008-06-20 Thread Victor Pajor
System description:
1 UFS root with Solaris 10U5 x86
1 raidz pool with 3 disks: c6t1d0s0, c7t0d0s0, c7t1d0s0

Description:
Just before my motherboard died, I installed OpenSolaris 2008.05 x86.
Why, you ask? Because I needed to confirm that it was the motherboard dying and not some other piece of hardware or software.
With that in mind, I replaced the motherboard, and since OpenSolaris was already installed, I decided to give it a try.

#zpool import -f zfs
i/o error

The only thing I noticed is that my device names changed, e.g. from c6t1d0s0 to c4t1d0s0.

So I decided to switch back to Solaris 10U5, but the same thing happens:
i/o error

Since I know my disks are operational and they haven't been accessed since the board replacement, I assume my data is still there but not seen by ZFS because the device names have changed.

Can someone please tell me how I can get my data back?
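
What I plan to try first is to throw away the stale cache file and let ZFS rescan the devices, since zpool import is supposed to match disks by the GUIDs in their labels rather than by the old device names. Roughly this (no guarantee it gets past the i/o error):

# rm /etc/zfs/zpool.cache
# zpool import
# zpool import -f zfs

If the rescan still shows the pool as faulted, the next step is probably zdb -l on each device node to see which labels are actually readable.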
 
 