Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-25 Thread hmmmm
eon:1:~#zdb -l /dev/rdsk/c1d0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3


Same for the other five drives in the pool.
What now?


Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-25 Thread eXeC001er
Try zdb -l /dev/rdsk/c1d0s0 instead (slice 0, rather than the whole-disk device).
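
For example, something like this quick loop checks slice 0 on all six disks at once. This is only a rough sketch; the c1d0 through c6d0 names are taken from the format listing elsewhere in this thread, so adjust them to whatever your system shows:

for d in c1d0 c2d0 c3d0 c4d0 c5d0 c6d0; do
    echo "=== $d ==="
    # point zdb at slice 0, where the ZFS labels live, not at the whole-disk node
    zdb -l /dev/rdsk/${d}s0
done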

2010/5/25 h <bajsadb...@pleasespam.me>:

> eon:1:~#zdb -l /dev/rdsk/c1d0
> --------------------------------------------
> LABEL 0
> --------------------------------------------
> failed to unpack label 0
> --------------------------------------------
> LABEL 1
> --------------------------------------------
> failed to unpack label 1
> --------------------------------------------
> LABEL 2
> --------------------------------------------
> failed to unpack label 2
> --------------------------------------------
> LABEL 3
> --------------------------------------------
> failed to unpack label 3
>
> Same for the other five drives in the pool.
> What now?


Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-25 Thread hmmmm
eon:6:~#zdb -l /dev/rdsk/c1d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 22
    name: 'videodrome'
    state: 0
    txg: 55561
    pool_guid: 5063071388564101079
    hostid: 919514
    hostname: 'Videodrome'
    top_guid: 15080595385902860350
    guid: 12602499757569516679
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 15080595385902860350
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 6001149345792
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 5800353223031346021
            path: '/dev/dsk/c1t0d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1123096/a'
            phys_path: '/p...@0,0/pci1043,8...@5/d...@0,0:a'
            whole_disk: 1
            DTL: 30
        children[1]:
            type: 'disk'
            id: 1
            guid: 11924500712739180074
            path: '/dev/dsk/c1t1d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1089951/a'
            phys_path: '/p...@0,0/pci1043,8...@5/d...@1,0:a'
            whole_disk: 1
            DTL: 31
        children[2]:
            type: 'disk'
            id: 2
            guid: 6297108650128259181
            path: '/dev/dsk/c10t0d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1089667/a'
            phys_path: '/p...@0,0/pci1043,8...@5,1/d...@0,0:a'
            whole_disk: 1
            DTL: 32
        children[3]:
            type: 'disk'
            id: 3
            guid: 828343558065682349
            path: '/dev/dsk/c0t1d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1098856/a'
            phys_path: '/p...@0,0/pci1043,8...@5,1/d...@1,0:a'
            whole_disk: 1
            DTL: 33
        children[4]:
            type: 'disk'
            id: 4
            guid: 16604516587932073210
            path: '/dev/dsk/c11t0d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1117911/a'
            phys_path: '/p...@0,0/pci1043,8...@5,2/d...@0,0:a'
            whole_disk: 1
            DTL: 34
        children[5]:
            type: 'disk'
            id: 5
            guid: 12602499757569516679
            path: '/dev/dsk/c11t1d0s0'
            devid: 'id1,s...@asamsung_hd103uj=s13pjdws256953/a'
            phys_path: '/p...@0,0/pci1043,8...@5,2/d...@1,0:a'
            whole_disk: 1
            DTL: 57
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 22
    name: 'videodrome'
    state: 0
    txg: 55561
    pool_guid: 5063071388564101079
    hostid: 919514
    hostname: 'Videodrome'
    top_guid: 15080595385902860350
    guid: 12602499757569516679
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 15080595385902860350
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 6001149345792
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 5800353223031346021
            path: '/dev/dsk/c1t0d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1123096/a'
            phys_path: '/p...@0,0/pci1043,8...@5/d...@0,0:a'
            whole_disk: 1
            DTL: 30
        children[1]:
            type: 'disk'
            id: 1
            guid: 11924500712739180074
            path: '/dev/dsk/c1t1d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1089951/a'
            phys_path: '/p...@0,0/pci1043,8...@5/d...@1,0:a'
            whole_disk: 1
            DTL: 31
        children[2]:
            type: 'disk'
            id: 2
            guid: 6297108650128259181
            path: '/dev/dsk/c10t0d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1089667/a'
            phys_path: '/p...@0,0/pci1043,8...@5,1/d...@0,0:a'
            whole_disk: 1
            DTL: 32
        children[3]:
            type: 'disk'
            id: 3
            guid: 828343558065682349
            path: '/dev/dsk/c0t1d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1098856/a'
            phys_path: '/p...@0,0/pci1043,8...@5,1/d...@1,0:a'
            whole_disk: 1
            DTL: 33
        children[4]:
            type: 'disk'
            id: 4
            guid: 16604516587932073210
            path: '/dev/dsk/c11t0d0s0'
            devid: 'id1,s...@awdc_wd20eads-00s2b0=_wd-wcavy1117911/a'
            phys_path: '/p...@0,0/pci1043,8...@5,2/d...@0,0:a'
            whole_disk: 1
            DTL: 34
        children[5]:
            type: 'disk'
            id: 5
            guid: 12602499757569516679
            path: '/dev/dsk/c11t1d0s0'
            devid: 'id1,s...@asamsung_hd103uj=s13pjdws256953/a'
            phys_path: '/p...@0,0/pci1043,8...@5,2/d...@1,0:a'
            whole_disk: 1
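
If the labels unpack cleanly on every slice, a forced import may be the next thing to try. A rough sketch only, not a guaranteed fix: -f overrides the "pool was last accessed by another system" check, and the numeric id is the pool_guid shown in the label above.

zpool import -f videodrome
# or, if the name is ambiguous, import by the pool GUID from the label:
zpool import -f 5063071388564101079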

[zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread hmmmm
Hi!
I had 6 disks in a raidz1 pool that I replaced, going from 1TB drives to 2TB drives.
I have installed the older 1TB drives in another system and would like to import
the old pool to access some files I accidentally deleted from the new pool.

The first system (with the 2TB drives) is an OpenSolaris system and the other is
running EON (based on snv_130).

I think the problem is that on the EON system the drives get different IDs,
and I didn't export the pool before I replaced the 1TB drives.
Only one drive shows up as ONLINE. Is this because it is the only one
connected in the same order as before? I don't remember in which order the drives
were connected to the controller in the OpenSolaris system.

What can I do to import this pool?
HELP!!!

eon:1:~#uname -a
SunOS eon 5.11 snv_130 i86pc i386 i86pc

eon:2:~#format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1d0 <SAMSUNG-S13PJDWS25695-0001-931.51GB>
          /p...@0,0/pci-...@d/i...@0/c...@0,0
       1. c2d0 <SAMSUNG-S13PJDWS25725-0001-931.51GB>
          /p...@0,0/pci-...@d/i...@1/c...@0,0
       2. c3d0 <SAMSUNG-S13PJDWS25695-0001-931.51GB>
          /p...@0,0/pci-...@d,1/i...@0/c...@0,0
       3. c4d0 <SAMSUNG-S13PJDWS25695-0001-931.51GB>
          /p...@0,0/pci-...@d,1/i...@1/c...@0,0
       4. c5d0 <SAMSUNG-S13PJ1KQ40672-0001-931.51GB>
          /p...@0,0/pci-...@d,2/i...@0/c...@0,0
       5. c6d0 <SAMSUNG-S13PJ1KQ40672-0001-931.51GB>
          /p...@0,0/pci-...@d,2/i...@1/c...@0,0
Specify disk (enter its number):

eon:3:~#zpool import
  pool: videodrome
    id: 5063071388564101079
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        videodrome   UNAVAIL  insufficient replicas
          raidz1-0   UNAVAIL  insufficient replicas
            c1t0d0   UNAVAIL  cannot open
            c1t1d0   UNAVAIL  cannot open
            c10t0d0  UNAVAIL  cannot open
            c0t1d0   UNAVAIL  cannot open
            c11t0d0  UNAVAIL  cannot open
            c1d0     ONLINE
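
One technique sometimes used when the cached device paths no longer match is to gather symlinks to just the candidate slices in a scratch directory and have zpool import scan only that directory. This is a hedged sketch: the /tmp/vd directory is made up for illustration, and the c1d0 through c6d0 names come from the format listing above.

mkdir /tmp/vd
for d in c1d0 c2d0 c3d0 c4d0 c5d0 c6d0; do
    # link slice 0 of each candidate disk into the scratch directory
    ln -s /dev/dsk/${d}s0 /tmp/vd/${d}s0
done
# scan only /tmp/vd for pool labels instead of the default /dev/dsk
zpool import -d /tmp/vd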


Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread Mark J Musante

On Mon, 24 May 2010, h wrote:

> I had 6 disks in a raidz1 pool that I replaced, going from 1TB drives to
> 2TB drives. I have installed the older 1TB drives in another system and
> would like to import the old pool to access some files I accidentally
> deleted from the new pool.


Did you use the 'zpool replace' command to do the replace?  If so, once 
the replace completes, the ZFS label on the original disk is overwritten 
to make it available for new pools.
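
For reference, the replacement sequence being described looks roughly like this. It is illustrative only: videodrome and c1t0d0 come from the label output earlier in the thread, while c7t0d0 stands in for a hypothetical new 2TB disk.

# attach the new disk and resilver the raidz1 member onto it
zpool replace videodrome c1t0d0 c7t0d0
# watch the resilver; per the explanation above, once it completes the old
# disk is detached and its label is freed up for reuse in new pools
zpool status -v videodrome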



Regards,
markm


Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread hmmmm
Yes, I used zpool replace.
Why is one drive recognized, then?
Shouldn't the labels have been wiped on all of them?

Am I screwed?


Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread hmmmm
But... wait, that can't be.
I disconnected the 1TB drives and plugged in the 2TB drives before running the
replace command. No information could have been written to the 1TB drives at all,
since they were physically offline.
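
If that is right, the labels should still be intact on every old disk. A quick consistency check (a rough sketch, using the same assumed c1d0 through c6d0 names as earlier in the thread) is to confirm that each slice reports its labels with matching txg values:

for d in c1d0 c2d0 c3d0 c4d0 c5d0 c6d0; do
    echo "=== $d ==="
    # each readable label prints its own txg line; they should all agree
    zdb -l /dev/rdsk/${d}s0 | grep 'txg:'
done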