Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-19 Thread Victor Latushkin

On 18.09.09 22:18, Dave Abrahams wrote:

I just did a fresh reinstall of OpenSolaris and I'm again seeing
the phenomenon described in 
http://article.gmane.org/gmane.os.solaris.opensolaris.zfs/26259

which I posted many months ago and got no reply to.

Can someone *please* help me figure out what's going on here?


Can you provide output of

zdb -l /dev/rdsk/c8t1d0p0
zdb -l /dev/rdsk/c8t1d0s0

zdb -l /dev/rdsk/c9t0d0p0
zdb -l /dev/rdsk/c9t0d0s0

zdb -l /dev/rdsk/c9t1d0p0
zdb -l /dev/rdsk/c9t1d0s0

as a starter?

I suspect there are some stale labels accessible through the ...p0 devices (maybe
back labels only) that unfortunately allow some pools that existed before to be opened.


So let's start finding this out.
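
If it is easier, a small loop like this will dump all six in one go (just a
sketch, using the device names above; adjust as needed):

for d in c8t1d0 c9t0d0 c9t1d0; do
    for p in p0 s0; do
        # p0 is the whole disk, s0 a slice within it
        echo "==== /dev/rdsk/${d}${p} ===="
        zdb -l /dev/rdsk/${d}${p}
    done
done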

victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-19 Thread David Abrahams

Hey, thanks for following up.

on Sat Sep 19 2009, Victor Latushkin Victor.Latushkin-AT-Sun.COM wrote:

 Can you provide output of

 zdb -l /dev/rdsk/c8t1d0p0
 zdb -l /dev/rdsk/c8t1d0s0

 zdb -l /dev/rdsk/c9t0d0p0
 zdb -l /dev/rdsk/c9t0d0s0

 zdb -l /dev/rdsk/c9t1d0p0
 zdb -l /dev/rdsk/c9t1d0s0

 as a starter?

 I suspect there are some stale labels accessible through the ...p0 devices (maybe
 back labels only) that unfortunately allow some pools that existed before to be
 opened.

 So let's start finding this out.

d...@hoss:~# zdb -l /dev/rdsk/c8t1d0p0

LABEL 0

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0

LABEL 1

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0

LABEL 2

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0

LABEL 3

version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
top_guid=14688829453117747875
guid=14688829453117747875
vdev_tree
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
d...@hoss:~# zdb -l /dev/rdsk/c8t1d0s0


LABEL 0

version=14
name='tank'
state=0
txg=321059
pool_guid=18040158237637153559
hostid=674932
hostname='hoss'
top_guid=17370712873548817583
guid=5539950970989281033
vdev_tree
type='raidz'
id=0
guid=17370712873548817583
nparity=2
metaslab_array=23
metaslab_shift=35
ashift=9
asize=4000755744768
is_log=0
children[0]
type='disk'
id=0
guid=17720655760296015906
path='/dev/dsk/c8t0d0s0'
devid='id1,s...@sata_st3500641as_3pm0j4rw/a'
phys_path='/p...@0,0/pci10f1,2...@7/d...@0,0:a'
whole_disk=1
DTL=35
children[1]
type='disk'
id=1
guid=5539950970989281033
path='/dev/dsk/c8t1d0s0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/a'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:a'
whole_disk=1
DTL=34
children[2]
type='disk'
id=2
guid=11100368085398512076
path='/dev/dsk/c9t0d0s0'
devid='id1,s...@sata_wdc_wd5000aacs-0_wd-wcasu4279114/a'
phys_path='/p...@0,0/pci10f1,2...@8/d...@0,0:a'
whole_disk=1
DTL=33
children[3]
type='disk'
id=3
guid=6967063319981472993
path='/dev/dsk/c9t1d0s0'

Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-19 Thread David Abrahams

on Fri Sep 18 2009, Cindy Swearingen Cindy.Swearingen-AT-Sun.COM wrote:


 Not much help, but some ideas:

 1. What does the zpool history -l output say for the phantom pools?

d...@hoss:~#  zpool history -l Xc8t1d0p0
History for 'Xc8t1d0p0':
2009-05-14.06:00:20 zpool create Xc8t1d0p0 c8t1d0p0 [user root on hydrasol:global]
2009-06-07.21:42:44 zpool export Xc8t1d0p0 Xc9t0d0p0 [user root on hoss:global]

d...@hoss:~# zpool history -l Xc9t0d0p0
History for 'Xc9t0d0p0':
2009-05-14.06:00:24 zpool create Xc9t0d0p0 c9t0d0p0 [user root on hydrasol:global]

d...@hoss:~# zpool history -l Xc9t1d0p0
History for 'Xc9t1d0p0':
2009-05-14.06:00:26 zpool create Xc9t1d0p0 c9t1d0p0 [user root on hydrasol:global]
2009-06-07.21:30:42 zpool import -a -f [user root on hoss:global]
2009-06-07.21:42:51 zpool export Xc8t1d0p0 Xc9t1d0p0 [user root on hoss:global]
2009-09-17.15:04:23 zpool import -a [user root on hoss:global]

d...@hoss:~# 

 Were they created at the same time as the root pool or the same time
 as tank?

No, apparently earlier, and they've been through a few OS reinstalls.

 2. The phantom pools contain the c8t1* and c9t1* fdisk partitions (p0s) that are in
 your tank pool as whole disks. A strange coincidence.

 Does zdb output or fmdump output identify the relationship, if
 any, between the c8 and c9 devices in the phantom pools and tank?

I don't know how to read that stuff, but I've attached my zdb output.
fmdump is essentially empty.

 3. I can file a bug for you. Please provide the system information,
 such as hardware, disks, OS release.

Thanks.  The hardware is all described at
http://techarcana.net/hydra/hardware/.  The OS release is OpenSolaris
2009.06 with the latest updates.

d...@hoss:~# zdb
Xc8t1d0p0
version=14
name='Xc8t1d0p0'
state=0
txg=67
pool_guid=799109629249470450
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=799109629249470450
children[0]
type='disk'
id=0
guid=14688829453117747875
path='/dev/dsk/c8t1d0p0'
devid='id1,s...@sata_st3500641as_3pm0bxs4/q'
phys_path='/p...@0,0/pci10f1,2...@7/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
Xc9t0d0p0
version=14
name='Xc9t0d0p0'
state=0
txg=66
pool_guid=12655905567020654415
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=12655905567020654415
children[0]
type='disk'
id=0
guid=611575587582790566
path='/dev/dsk/c9t0d0p0'
devid='id1,s...@sata_wdc_wd5000aacs-0_wd-wcasu4279114/q'
phys_path='/p...@0,0/pci10f1,2...@8/d...@0,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
Xc9t1d0p0
version=14
name='Xc9t1d0p0'
state=0
txg=67
pool_guid=13088732420232844728
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=13088732420232844728
children[0]
type='disk'
id=0
guid=5881429924050167143
path='/dev/dsk/c9t1d0p0'
devid='id1,s...@sata_wdc_wd5000aacs-0_wd-wcasu3010505/q'
phys_path='/p...@0,0/pci10f1,2...@8/d...@1,0:q'
whole_disk=0
metaslab_array=23
metaslab_shift=32
ashift=9
asize=500103118848
is_log=0
rpool
version=14
name='rpool'
state=0
txg=3151
pool_guid=8480802010740526288
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=8480802010740526288
children[0]
type='disk'
id=0
guid=10153492011253799981
path='/dev/dsk/c7d0s0'
devid='id1,c...@awdc_wd1600aajb-00j3a0=_wd-wcav30909252/a'
phys_path='/p...@0,0/pci-...@6/i...@0/c...@0,0:a'
whole_disk=0
metaslab_array=23
metaslab_shift=30
ashift=9
asize=160001425408
is_log=0
tank
version=14
name='tank'
state=0
txg=321059
pool_guid=18040158237637153559
hostid=674932
hostname='hoss'
vdev_tree
type='root'
id=0
guid=18040158237637153559
children[0]
type='raidz'
id=0
guid=17370712873548817583
nparity=2
metaslab_array=23
metaslab_shift=35
ashift=9
asize=4000755744768
 

Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-18 Thread Cindy Swearingen

Dave,

I've searched opensolaris.org and our internal bug database.
I don't see that anyone else has reported this problem.

I asked someone from the OSOL install team and this behavior
is a mystery.

If you destroyed the phantom pools before you reinstalled,
then they were probably brought back by the import operations,
but I can't be sure.

If you want to export your tank pool and re-import it, then
maybe you should just use zpool import tank until the root
cause of the phantom pools is determined.
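
For example (just a sketch): rather than the catch-all

  zpool import -a

which picks up every pool it can find, phantoms included, list first and
then import only by name:

  zpool import        # shows which exported pools are visible
  zpool import tank   # imports only the pool you actually want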


Not much help, but some ideas:

1. What does the zpool history -l output say for the phantom pools?
Were they created at the same time as the root pool or the same time
as tank?

2. The phantom pools contain the c8t1* and c9t1* fdisk partitions (p0s) 
that are in your tank pool as whole disks. A strange coincidence.


Does zdb output or fmdump output identify the relationship, if
any, between the c8 and c9 devices in the phantom pools and tank?
(There's a rough sketch of what I mean below the list.)

3. I can file a bug for you. Please provide the system information,
such as hardware, disks, OS release.
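
Here is the rough sketch for item 2; the egrep filter and the choice of
c8t1d0p0 are just examples:

  # cached pool configs, trimmed to names and guids, so the phantom
  # pools' devices can be compared against tank's by eye
  zdb | egrep 'name=|guid='

  # the same trim applied to the on-disk labels of one suspect device
  zdb -l /dev/rdsk/c8t1d0p0 | egrep 'name=|guid='

  # any error telemetry FMA has recorded for these devices
  fmdump -eV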


Cindy



On 09/18/09 12:18, Dave Abrahams wrote:

I just did a fresh reinstall of OpenSolaris and I'm again seeing
the phenomenon described in 
http://article.gmane.org/gmane.os.solaris.opensolaris.zfs/26259

which I posted many months ago and got no reply to.

Can someone *please* help me figure out what's going on here?

Thanks in Advance,
--
Dave Abrahams
BoostPro Computing
http://boostpro.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss