Am 08.12.11 18:14, schrieb Stephan Budach:
Hi all,

I have a server built on top of an Asus board equipped with an Areca 1680 HBA. Since ZFS likes raw disks, I changed its mode from RAID to JBOD in the firmware and rebooted the host.
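(As an aside, for anyone repeating this: after such a firmware change, a rescan of the device tree is usually enough to make the newly exposed JBOD disks visible - roughly along these lines; a sketch, and the exact options may vary per Solaris release:

   devfsadm -c disk     # build device nodes/links for newly exposed disks
   devfsadm -Cv         # clean up links left over from the old RAID volumes
   format               # verify the disks now show up individually
)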
Now, I have 16 drives in the chassis and they show up like this:

root@vsm01:~# format
Searching for disks...done

c5t0d0: configured with capacity of 978.00MB


AVAILABLE DISK SELECTIONS:
       0. c3t1d0 <SEAGATE-ST3300655SS-R001 cyl 36469 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,0
       1. c3t1d1 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,1
       2. c3t1d2 <Seagate-ST3500320NS-R001-465.76GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,2
       3. c3t1d3 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,3
       4. c3t1d4 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,4
       5. c3t1d5 <Seagate-ST3500320NS-R001-465.76GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,5
       6. c3t1d6 <Seagate-ST3500320NS-R001-465.76GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,6
       7. c3t1d7 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@1,7
       8. c3t2d0 <Seagate-ST3500320NS-R001-465.76GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,0
       9. c3t2d1 <Seagate-ST3500320NS-R001-465.76GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,1
      10. c3t2d2 <Hitachi-HUA722010CLA330-R001 cyl 60782 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,2
      11. c3t2d3 <Hitachi-HUA722010CLA330-R001-931.51GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,3
      12. c3t2d4 <Hitachi-HUA722010CLA330-R001-931.51GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,4
      13. c3t2d5 <Hitachi-HUA722010CLA330-R001-931.51GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,5
      14. c3t2d6 <Hitachi-HUA722010CLA330-R001 cyl 60798 alt 2 hd 255 sec 126>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,6
      15. c3t2d7 <Hitachi-HUA722010CLA330-R001-931.51GB>
          /pci@0,0/pci8086,29f1@1/pci17d3,1680@0/disk@2,7
      16. c5t0d0 <Generic-USB EDC-1.00 cyl 978 alt 2 hd 64 sec 32>
          /pci@0,0/pci1043,819e@1d,7/storage@4/disk@0,0

That looked okay to me, so I went ahead and created a zpool containing two mirrors like this:

root@vsm01:~# zpool create vsm_pool1_1T mirror c3t2d1 c3t1d6 mirror c3t2d0 c3t1d5

This went just fine and the zpool was created like this:

root@vsm01:~# zpool status vsm_pool1_1T
  pool: vsm_pool1_1T
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vsm_pool1_1T  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d1  ONLINE       0     0     0
            c3t1d6  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t1d5  ONLINE       0     0     0

errors: No known data errors

Now, creating another zpool from the remaining 500GB drives failed with this weird error:

root@vsm01:~# zpool create vsm_pool2_1T mirror c3t1d4 c3t1d3 mirror c3t1d2 c3t1d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t2d1s0 is part of active ZFS pool vsm_pool1_1T. Please see zpool(1M).

Does anybody have an idea of what is going wrong here? It doesn't seem to matter which of the drives I pick for the new zpool, I always get this error message - even when trying only mirror c3t1d4 c3t1d3 or mirror c3t1d2 c3t1d1 alone.
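
One way to narrow it down (a sketch - the s0 slice name is just an assumption, use whatever slice or whole-disk device actually carries the label on your system) would be to compare what ZFS thinks is on the disks with what the running pool really claims:

   zdb -l /dev/dsk/c3t1d4s0       # dump any ZFS label(s) still present on the disk
   zpool status -v vsm_pool1_1T   # list the devices the existing pool actually uses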

Thanks,
budy
Hmm… now I did it the other way round and this time it worked as expected. It seems that these disks had been used in some other zpool configuration without being properly removed, so they may still have had some old labels on them, right?
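
For the record, if stale labels really were the culprit, they can be checked and wiped before reusing a disk - a rough sketch, using c3t1d1 purely as an example and assuming the label lives on slice 0 and the disk is definitely no longer part of any pool:

   zdb -l /dev/dsk/c3t1d1s0                                  # show leftover labels, if any
   dd if=/dev/zero of=/dev/rdsk/c3t1d1s0 bs=1024k count=1    # zero the two labels at the start of the device

ZFS keeps two more label copies in the last 512 KB of the device, so either overwrite the tail as well or simply re-label the disk in format; some newer ZFS builds also offer a 'zpool labelclear' subcommand for exactly this.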

Thanks