No typo, which is why I think this is a bug somewhere.  d14 has never been a 
member of nalgene.  d0 is an active disk in pool nalgene.

My best guess is that something somewhere incorrectly assumes the disk is 
always d0 (probably because single-disk targets are more common than 
multi-disk targets?).  Even running format gives a warning:
hostname:~# format /dev/rdsk/c6t21800080E512C872d14s0
selecting /dev/rdsk/c6t21800080E512C872d14s0
[disk formatted]
/dev/dsk/c6t21800080E512C872d0s0 is part of active ZFS pool nalgene. Please see 
zpool(1M).

(The same warning occurs if I use s2 instead of s0.)



Output of the 'zdb -l /dev/dsk/c6t21800080E512C872d14s0'  command:
--------------------------------------------
LABEL 0
--------------------------------------------
    version=4
    name='bottlecap'
    state=0
    txg=4
    pool_guid=14253076362145821680
    top_guid=16379344036138178892
    guid=16379344036138178892
    vdev_tree
        type='disk'
        id=0
        guid=16379344036138178892
        path='/dev/dsk/c6t21800080E512C872d14s0'
        devid='id1,[EMAIL PROTECTED]/a'
        whole_disk=0
        metaslab_array=17
        metaslab_shift=29
        ashift=9
        asize=72782970880
--------------------------------------------
LABEL 1
--------------------------------------------
    version=4
    name='bottlecap'
    state=0
    txg=4
    pool_guid=14253076362145821680
    top_guid=16379344036138178892
    guid=16379344036138178892
    vdev_tree
        type='disk'
        id=0
        guid=16379344036138178892
        path='/dev/dsk/c6t21800080E512C872d14s0'
        devid='id1,[EMAIL PROTECTED]/a'
        whole_disk=0
        metaslab_array=17
        metaslab_shift=29
        ashift=9
        asize=72782970880
--------------------------------------------
LABEL 2
--------------------------------------------
    version=4
    name='bottlecap'
    state=0
    txg=4
    pool_guid=14253076362145821680
    top_guid=16379344036138178892
    guid=16379344036138178892
    vdev_tree
        type='disk'
        id=0
        guid=16379344036138178892
        path='/dev/dsk/c6t21800080E512C872d14s0'
        devid='id1,[EMAIL PROTECTED]/a'
        whole_disk=0
        metaslab_array=17
        metaslab_shift=29
        ashift=9
        asize=72782970880
--------------------------------------------
LABEL 3
--------------------------------------------
    version=4
    name='bottlecap'
    state=0
    txg=4
    pool_guid=14253076362145821680
    top_guid=16379344036138178892
    guid=16379344036138178892
    vdev_tree
        type='disk'
        id=0
        guid=16379344036138178892
        path='/dev/dsk/c6t21800080E512C872d14s0'
        devid='id1,[EMAIL PROTECTED]/a'
        whole_disk=0
        metaslab_array=17
        metaslab_shift=29
        ashift=9
        asize=72782970880
--end output--
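For context on why four identical labels show up above: ZFS keeps four 256 KB copies of the vdev label, two at the front of the device and two at the end, which is why stale label info can survive repartitioning and trip up zpool create. Below is a minimal sketch of where those regions sit and how one might zero them before re-using a disk. It deliberately runs against a scratch file, not a real device, since dd against a /dev/rdsk path destroys data; the 4 MB size is made up for illustration. (Newer ZFS releases have a 'zpool labelclear' subcommand for this, though I don't believe it exists on this Solaris 10 update.)

```shell
# Stand-in "disk": a 4 MB scratch file instead of /dev/rdsk/...
DISK=$(mktemp)
dd if=/dev/zero of="$DISK" bs=1024k count=4 2>/dev/null

SIZE=$(wc -c < "$DISK")
LABEL=262144                      # each ZFS vdev label is 256 KB

# The four label offsets: L0/L1 in the first 512 KB,
# L2/L3 in the last 512 KB of the device.
L0=0
L1=$LABEL
L2=$((SIZE - 2 * LABEL))
L3=$((SIZE - LABEL))

# Zero the front label region (L0 and L1)...
dd if=/dev/zero of="$DISK" bs=$LABEL count=2 conv=notrunc 2>/dev/null
# ...and the back label region (L2 and L3).
dd if=/dev/zero of="$DISK" bs=$LABEL seek=$(( (SIZE / LABEL) - 2 )) \
    count=2 conv=notrunc 2>/dev/null

echo "size=$SIZE L2_offset=$L2 L3_offset=$L3"
rm -f "$DISK"
```

After wiping both regions on a real disk, 'zdb -l' on that device should report failure to find a valid label, and zpool create should stop complaining about the old pool.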

William Yang
  ----- Original Message ----- 
  From: Robin Guo 
  To: Jeff Cheeney 
  Cc: William Yang ; [EMAIL PROTECTED] ; [email protected] 
  Sent: Wednesday, May 14, 2008 11:07 PM
  Subject: Re: [zfs-discuss] [storage-discuss] ZFS and fibre channel issues


  Hi, William,

    You didn't mention c6t21800080E512C872d0s0 in your command line;
  maybe it is a typo of c6t21800080E512C872d14s0?

    # zpool create bottlecap c6t21800080E512C872d14 c6t21800080E512C872d15 

    The warning looks like it is caused by leftover label info that may still
  reside on your disk. Could you post the output of
  'zdb -l /dev/dsk/c6t21800080E512C872d14s0'? It should contain something
  related to nalgene. In any case, if you have re-used that disk to create a
  new pool, I suspect the issue is gone.

    - Regards,

  Jeff Cheeney wrote: 
The ZFS crew might be better able to answer this question. (CC'd here)

       --jc

William Yang wrote:
  I am having issues creating a zpool using entire disks with a fibre 
channel array.  The array is a Dell PowerVault 660F.
When I run "zpool create bottlecap c6t21800080E512C872d14 
c6t21800080E512C872d15", I get the following error:
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6t21800080E512C872d0s0 is part of active ZFS pool nalgene. 
Please see zpool(1M).
I was able to create pool nalgene using entire disks, but that was 
a while back.  I have a feeling one of the patches I applied after 
creating nalgene broke something.  I am currently using Solaris 10 
SPARC 8/07, kernel 127111-11.
 
Also, if I append s0 to the disk name (i.e. c6t21800080E512C872d14s0), 
then I can create the new zpool.  Any ideas?
 
William Yang
------------------------------------------------------------------------

_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
  
    
_______________________________________________
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo
