[zfs-discuss] zpool I/O error

2010-03-19 Thread Grant Lowe
Hi all,

I'm trying to delete a zpool and when I do, I get this error:

# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
# 

The pools I have on this box look like this:

# zpool list
NAME           SIZE   USED  AVAIL   CAP  HEALTH    ALTROOT
oradata_fs1    532G   119K   532G    0%  DEGRADED  -
rpool          136G  28.6G   107G   21%  ONLINE    -
#

Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.



Re: [zfs-discuss] zpool i/o error

2008-08-05 Thread Victor Pajor
I found out what my problem was.
It was hardware related: my two disks were on a SCSI channel that didn't work
properly.
It wasn't a ZFS problem.
Thank you everybody who replied.

My Bad.
 
 


Re: [zfs-discuss] zpool i/o error

2008-07-05 Thread Victor Pajor
Booted from 2008.05, and the error was the same as before: corrupted data for
the last two disks.

zdb -l was also the same as before: it read the label from disk 1 but not from
disks 2 and 3.
 
 


Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
Can you try just deleting the zpool.cache file and letting it rebuild on
import? I would guess a listing of your old devices was in there when the
system came back up with the new hardware, since the OS stayed the same.
 
 


Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Victor Pajor
# rm /etc/zfs/zpool.cache
# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c5t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data
 
 


Re: [zfs-discuss] zpool i/o error

2008-07-02 Thread Bryan Wagoner
I'll have to do some thunkin' on this. We just need to get back one of the
disks; both would be great, but one more would do the trick.

After all other avenues have been tried, one thing you can try is to boot the
2008.05 live CD without installing the OS, import the pool, and see if you have
any better luck. If not, try zdb -l again under the live CD, as there have been
bugs with it in the past on older versions of the ZFS code.
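
For example, from the live CD environment it would look something like this
(the pool name and device are the ones from this thread; device names may come
up differently under the live CD, and -R /mnt just mounts the pool's
filesystems under /mnt instead of their normal mountpoints):

# zpool import
# zpool import -f -R /mnt zfs
# zdb -l /dev/rdsk/c7t0d0s0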

Will edit this message if I can think of something else to try.
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-30 Thread Victor Pajor
By the looks of things, I don't think that I will have any answers.

So the moral of the story is (if your data is valuable):
1 - Never trust your hardware or software, unless it's fully redundant.
2 - ALWAYS have an external backup 

because, even in the best of times, SHIT HAPPENS.
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-27 Thread Victor Pajor
Here is what I found out.

AVAILABLE DISK SELECTIONS:
   0. c5t0d0 DEFAULT cyl 4424 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   1. c5t1d0 SEAGATE-ST336754LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   2. c6t0d0 SEAGATE-ST336753LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
   3. c6t1d0 SEAGATE-ST336753LW-HPS2-33.92GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
   4. c7t0d0 DEFAULT cyl 8921 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci9005,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   5. c7t1d0 DEFAULT cyl 8921 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci9005,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0

I created a little script:

#!/bin/bash
# Dump the ZFS label of every device node under /dev/rdsk;
# run as root so zdb can open the raw devices.
for i in /dev/rdsk/*; do
    echo "$i"
    zdb -l "$i"
done

Here is some of the output:

(1) #zdb -l /dev/rdsk/c5t1d0s0


LABEL 0

version=4
name='zfs'
state=0
txg=7855332
pool_guid=3801622416844369872
hostid=345240675
hostname='sun'
top_guid=4004063599069763239
guid=4086156223654637831
vdev_tree
type='raidz'
id=0
guid=4004063599069763239
nparity=1
metaslab_array=13
metaslab_shift=30
ashift=9
asize=109220462592
is_log=0
children[0]
type='disk'
id=0
guid=4086156223654637831
path='/dev/dsk/c6t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=69
children[1]
type='disk'
id=1
guid=13320021127057678234
path='/dev/dsk/c7t0d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=68
children[2]
type='disk'
id=2
guid=5212524563381
path='/dev/dsk/c7t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=22
...
LABEL 1, LABEL 2, and LABEL 3 are omitted for clarity.


(2) #zdb -l /dev/rdsk/c6t1d0s0
(3) #zdb -l /dev/rdsk/c7t0d0s0
(4) #zdb -l /dev/rdsk/c7t1d0s0

all three commands give this:

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3



(5) #zdb -l /dev/rdsk/c7t0d0p0


LABEL 0

version=4
name='data'
state=0
txg=2333244
pool_guid=18349152765965118757
hostid=409943152
hostname='opensolaris'
top_guid=4131806235391152254
guid=13715042150527401204
vdev_tree
type='mirror'
id=0
guid=4131806235391152254
metaslab_array=14
metaslab_shift=29
ashift=9
asize=73402941440
is_log=0
children[0]
type='disk'
id=0
guid=4088711380714589637
path='/dev/dsk/c7t1d0p0'
devid='id1,[EMAIL PROTECTED]/q'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci9005,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:q'
whole_disk=0
children[1]
type='disk'
id=1
guid=13715042150527401204
path='/dev/dsk/c7t0d0p0'
devid='id1,[EMAIL PROTECTED]/q'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci9005,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:q'
whole_disk=0

LABEL 1, LABEL 2, and LABEL 3 are omitted for clarity.

Now here's my question: when executing command (1), isn't children[0]'s path
supposed to be /dev/rdsk/c5t1d0s0, and not /dev/dsk/c6t1d0s0?

children[1] and children[2] are also not in sync; they refer to disks that are
used by another

Re: [zfs-discuss] zpool i/o error

2008-06-25 Thread Victor Pajor
What I mean about the error is this: when the system crashes, ZFS just loses
its references and thinks the disks are not available, when in fact the same
disks worked perfectly right up until the motherboard failure.

I'm not just asking; isn't ZFS supposed to cope with this kind of crash?

There must be a way of diagnosing what is going on.

Here is the label output after I changed the disk configuration: all I did was
add another SCSI controller and two more disks.


bash-3.00# zdb -l /dev/rdsk/c4t1d0s0

LABEL 0

   version=4
   name='zfs'
   state=0
   txg=7855332
   pool_guid=3801622416844369872
   hostid=345240675
   hostname='sun'
   top_guid=4004063599069763239
   guid=4086156223654637831
   vdev_tree
   type='raidz'
   id=0
   guid=4004063599069763239
   nparity=1
   metaslab_array=13
   metaslab_shift=30
   ashift=9
   asize=109220462592
   is_log=0
   children[0]
   type='disk'
   id=0
   guid=4086156223654637831
   path='/dev/dsk/c6t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=69
   children[1]
   type='disk'
   id=1
   guid=13320021127057678234
   path='/dev/dsk/c7t0d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=68
   children[2]
   type='disk'
   id=2
   guid=5212524563381
   path='/dev/dsk/c7t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=22

LABEL 1

   version=4
   name='zfs'
   state=0
   txg=7855332
   pool_guid=3801622416844369872
   hostid=345240675
   hostname='sun'
   top_guid=4004063599069763239
   guid=4086156223654637831
   vdev_tree
   type='raidz'
   id=0
   guid=4004063599069763239
   nparity=1
   metaslab_array=13
   metaslab_shift=30
   ashift=9
   asize=109220462592
   is_log=0
   children[0]
   type='disk'
   id=0
   guid=4086156223654637831
   path='/dev/dsk/c6t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=69
   children[1]
   type='disk'
   id=1
   guid=13320021127057678234
   path='/dev/dsk/c7t0d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=68
   children[2]
   type='disk'
   id=2
   guid=5212524563381
   path='/dev/dsk/c7t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=22

LABEL 2

   version=4
   name='zfs'
   state=0
   txg=7855332
   pool_guid=3801622416844369872
   hostid=345240675
   hostname='sun'
   top_guid=4004063599069763239
   guid=4086156223654637831
   vdev_tree
   type='raidz'
   id=0
   guid=4004063599069763239
   nparity=1
   metaslab_array=13
   metaslab_shift=30
   ashift=9
   asize=109220462592
   is_log=0
   children[0]
   type='disk'
   id=0
   guid=4086156223654637831
   path='/dev/dsk/c6t1d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=69
   children[1]
   type='disk'
   id=1
   guid=13320021127057678234
   path='/dev/dsk/c7t0d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
   whole_disk=1
   DTL=68
   children[2]
   type='disk'
   id=2
   

Re: [zfs-discuss] zpool i/o error

2008-06-25 Thread Richard Elling
Victor Pajor wrote:
 What I mean about the error is this: when the system crashes, ZFS just loses
 its references and thinks the disks are not available, when in fact the same
 disks worked perfectly right up until the motherboard failure.

 I'm not just asking; isn't ZFS supposed to cope with this kind of crash?

 There must be a way of diagnosing what is going on.
   

I believe this is working as designed.  A cache is kept in
/etc/zfs/zpool.cache which contains a list of the devices and
pools that should be imported automatically at boot time.
The alternative is to scan every device, which does not scale
well to large systems and can cause consternation for shared
storage clusters.  When you changed the motherboard, you
also changed the device list, which is why the pools were
not imported automatically at boot.  This is an unusual case,
but the solution is to export (thus removing the entries from
zpool.cache) and then import (adding new entries to zpool.cache).
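
In this case that cycle would look something like the following (pool name
taken from this thread):

# zpool export zfs
# zpool import
# zpool import zfs

The export removes the pool's entry from /etc/zfs/zpool.cache, the bare import
scans the devices and lists the pools it can see, and the final import writes
the pool back into the cache with its new device paths.  If the pool is not
currently imported, only the last two steps apply.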
 -- richard

 Here is the label output after I changed the disk configuration: all I did
 was add another SCSI controller and two more disks.


 bash-3.00# zdb -l /dev/rdsk/c4t1d0s0
 
 LABEL 0
 
version=4
name='zfs'
state=0
txg=7855332
pool_guid=3801622416844369872
hostid=345240675
hostname='sun'
top_guid=4004063599069763239
guid=4086156223654637831
vdev_tree
type='raidz'
id=0
guid=4004063599069763239
nparity=1
metaslab_array=13
metaslab_shift=30
ashift=9
asize=109220462592
is_log=0
children[0]
type='disk'
id=0
guid=4086156223654637831
path='/dev/dsk/c6t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
 PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=69
children[1]
type='disk'
id=1
guid=13320021127057678234
path='/dev/dsk/c7t0d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
 PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=68
children[2]
type='disk'
id=2
guid=5212524563381
path='/dev/dsk/c7t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
 PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=22
 
 LABEL 1
 
version=4
name='zfs'
state=0
txg=7855332
pool_guid=3801622416844369872
hostid=345240675
hostname='sun'
top_guid=4004063599069763239
guid=4086156223654637831
vdev_tree
type='raidz'
id=0
guid=4004063599069763239
nparity=1
metaslab_array=13
metaslab_shift=30
ashift=9
asize=109220462592
is_log=0
children[0]
type='disk'
id=0
guid=4086156223654637831
path='/dev/dsk/c6t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
 PROTECTED]/pci10f1,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=69
children[1]
type='disk'
id=1
guid=13320021127057678234
path='/dev/dsk/c7t0d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
 PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=68
children[2]
type='disk'
id=2
guid=5212524563381
path='/dev/dsk/c7t1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci1022,[EMAIL 
 PROTECTED]/pci10f1,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
whole_disk=1
DTL=22
 
 LABEL 2
 
version=4
name='zfs'
state=0
txg=7855332
pool_guid=3801622416844369872
hostid=345240675
hostname='sun'
top_guid=4004063599069763239
guid=4086156223654637831
vdev_tree
type='raidz'
id=0
guid=4004063599069763239
nparity=1
metaslab_array=13
metaslab_shift=30
ashift=9
asize=109220462592
is_log=0
children[0]
type='disk'

Re: [zfs-discuss] zpool i/o error

2008-06-24 Thread Victor Pajor
# zpool export zfs
cannot open 'zfs': no such pool

Any command other than zpool import gives "cannot open 'zfs': no such pool".


I can't seem to find any useful information on this type of error.

Has anyone had this kind of problem?
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-22 Thread Tomas Ögren
On 21 June, 2008 - Victor Pajor sent me these 0,9K bytes:

 Another thing
 
 config:
 
 zfs         FAULTED   corrupted data
   raidz1    ONLINE
     c1t1d0  ONLINE
     c7t0d0  UNAVAIL   corrupted data
     c7t1d0  UNAVAIL   corrupted data
 
 c7t0d0 and c7t1d0 don't exist; that's normal. They are c2t0d0 and c2t1d0.
 
 AVAILABLE DISK SELECTIONS:
0. c1t0d0 DEFAULT cyl 4424 alt 2 hd 255 sec 63
   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
 PROTECTED]/[EMAIL PROTECTED],0
1. c1t1d0 SEAGATE-ST336754LW-0005-34.18GB
   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
 PROTECTED]/[EMAIL PROTECTED],0
2. c2t0d0 SEAGATE-ST336753LW-0005-34.18GB
   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
 PROTECTED],1/[EMAIL PROTECTED],0
3. c2t1d0 SEAGATE-ST336753LW-HPS2-33.92GB
   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
 PROTECTED],1/[EMAIL PROTECTED],0

zpool export zfs; zpool import zfs

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Richard Elling
Victor Pajor wrote:
 System description:
 1 root UFS with Solaris 10U5 x86
 1 raidz pool with 3 disks: (c6t1d0s0, c7t0d0s0, c7t1d0s0)

 Description:
 Just before the death of my motherboard, I installed OpenSolaris 2008.05 x86.
 Why, you ask? Because I needed to confirm that it was the motherboard dying
 and not any other hardware or software.
 With that in mind, I replaced the motherboard, and since OpenSolaris was
 already installed, I decided to give it a try.

 #zpool import -f zfs
 i/o error

 The only thing that I've noticed is that my device ids changed from c6t1d0s0 
 to c4t1d0s0.

 So I've decided to switch back to Solaris 10U5, but the same thing happens.
 i/o error

 Since I know that my disks are operational and hadn't been accessed since the
 board replacement, I assume that my data is still available but not seen by
 ZFS because the device IDs have changed.
   

No, ZFS will find the disks.  Something else is wrong.

 Can someone please tell me how I can get my data back?
   

The I/O error generally means that a device cannot be read.
Try a simple zpool import and see what pools it thinks are
available.
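
With no arguments, zpool import only scans the devices under /dev/dsk and
reports the pools it finds; it does not import anything:

# zpool import

If the device nodes live somewhere unusual, the search directory can be
overridden with -d <dir>.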
 -- richard



Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Victor Pajor
Thank you for your fast reply.
You were right: there is something else wrong.

# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data


NOW !!! How can that be? It was running OK before I changed the motherboard.
Right before changing it, the system crashed; isn't ZFS supposed to handle
this?
How can I get the data back, or diagnose the problem further?
 
 


Re: [zfs-discuss] zpool i/o error

2008-06-21 Thread Victor Pajor
Another thing

config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data

c7t0d0 and c7t1d0 don't exist; that's normal. They are c2t0d0 and c2t1d0.

AVAILABLE DISK SELECTIONS:
   0. c1t0d0 DEFAULT cyl 4424 alt 2 hd 255 sec 63
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 SEAGATE-ST336754LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
   2. c2t0d0 SEAGATE-ST336753LW-0005-34.18GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
   3. c2t1d0 SEAGATE-ST336753LW-HPS2-33.92GB
  /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0
 
 


[zfs-discuss] zpool i/o error

2008-06-20 Thread Victor Pajor
System description:
1 root UFS with Solaris 10U5 x86
1 raidz pool with 3 disks: (c6t1d0s0, c7t0d0s0, c7t1d0s0)

Description:
Just before the death of my motherboard, I installed OpenSolaris 2008.05 x86.
Why, you ask? Because I needed to confirm that it was the motherboard dying and
not any other hardware or software.
With that in mind, I replaced the motherboard, and since OpenSolaris was
already installed, I decided to give it a try.

#zpool import -f zfs
i/o error

The only thing that I've noticed is that my device ids changed from c6t1d0s0 to 
c4t1d0s0.

So I've decided to switch back to Solaris 10U5, but the same thing happens.
i/o error

Since I know that my disks are operational and hadn't been accessed since the
board replacement, I assume that my data is still available but not seen by ZFS
because the device IDs have changed.

Can someone please tell me how I can get my data back?
 
 