Hi all,

I have a server that is built on top of an Asus board equipped with an Areca 1680 HBA. Since ZFS likes raw disks, I changed its mode from RAID to JBOD in the firmware and rebooted the host.
Now I have 16 drives in the chassis, and the format output looks like this:

root@vsm01:~# format
Searching for disks...done

c5t0d0: configured with capacity of 978.00MB

       0. c3t1d0 <SEAGATE-ST3300655SS-R001 cyl 36469 alt 2 hd 255 sec 63>
       1. c3t1d1 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
       2. c3t1d2 <Seagate-ST3500320NS-R001-465.76GB>
       3. c3t1d3 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
       4. c3t1d4 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
       5. c3t1d5 <Seagate-ST3500320NS-R001-465.76GB>
       6. c3t1d6 <Seagate-ST3500320NS-R001-465.76GB>
       7. c3t1d7 <Seagate-ST3500320NS-R001 cyl 60799 alt 2 hd 255 sec 63>
       8. c3t2d0 <Seagate-ST3500320NS-R001-465.76GB>
       9. c3t2d1 <Seagate-ST3500320NS-R001-465.76GB>
      10. c3t2d2 <Hitachi-HUA722010CLA330-R001 cyl 60782 alt 2 hd 255 sec 126>
      11. c3t2d3 <Hitachi-HUA722010CLA330-R001-931.51GB>
      12. c3t2d4 <Hitachi-HUA722010CLA330-R001-931.51GB>
      13. c3t2d5 <Hitachi-HUA722010CLA330-R001-931.51GB>
      14. c3t2d6 <Hitachi-HUA722010CLA330-R001 cyl 60798 alt 2 hd 255 sec 126>
      15. c3t2d7 <Hitachi-HUA722010CLA330-R001-931.51GB>
      16. c5t0d0 <Generic-USB EDC-1.00 cyl 978 alt 2 hd 64 sec 32>

Looked okay to me, so I went ahead and created a zpool containing two mirrors like this:

root@vsm01:~# zpool create vsm_pool1_1T mirror c3t2d1 c3t1d6 mirror c3t2d0 c3t1d5

This went just fine and the zpool was created like this:

root@vsm01:~# zpool status vsm_pool1_1T
  pool: vsm_pool1_1T
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vsm_pool1_1T  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d1  ONLINE       0     0     0
            c3t1d6  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t1d5  ONLINE       0     0     0

errors: No known data errors

Now, creating another zpool from the remaining 500GB drives failed with this weird error:

root@vsm01:~# zpool create vsm_pool2_1T mirror c3t1d4 c3t1d3 mirror c3t1d2 c3t1d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t2d1s0 is part of active ZFS pool vsm_pool1_1T. Please see zpool(1M).

Does anybody have an idea of what is going wrong here? It doesn't seem to matter which of the drives I pick for the new zpool, I always get this error message - even when trying only mirror c3t1d4 c3t1d3 or mirror c3t1d2 c3t1d1 alone.
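If it helps with the diagnosis: my suspicion is that the JBOD conversion left the controller presenting the same physical disk under two c#t#d# names, which would explain why ZFS thinks c3t2d1s0 is in use. I haven't verified this yet, but I'd check it roughly like this (the commands are the usual Solaris ones; paths are from the format listing above):

```
# Look for a ZFS label on a disk I believe is unused; if it reports
# vsm_pool1_1T, then c3t1d4 and one of the pool disks are really the
# same physical drive behind two device nodes.
zdb -l /dev/dsk/c3t1d4s0

# Compare the drive serial numbers of the two device nodes; identical
# serials would confirm a duplicate mapping.
iostat -En c3t1d4 c3t2d1
```

I'll post the output once I've had a chance to run these.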


zfs-discuss mailing list
