I'm trying to add some additional devices to my existing pool, but it's not 
working. I'm adding a raidz group of five 300 GB drives, but the command 
always fails: 

root@kronos:/ # zpool add raid raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0
Assertion failed: nvlist_lookup_string(cnv, "path", &path) == 0, file zpool_vdev.c, line 631
Abort (core dumped)
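(For anyone trying to reproduce this: zpool add also accepts -n, which prints 
the configuration that would be used without actually modifying the pool. If 
the assertion fires on a dry run too, that would point at config validation 
rather than the label write. A sketch with the same devices: 

root@kronos:/ # zpool add -n raid raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0
)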

The disks all work: they labeled easily with 'format' after zfs and other 
tools refused to look at them, and newfs creates a UFS filesystem on them 
with no issues. I just can't add them to the existing zpool. 
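(If it's useful, the labels can be double-checked with prtvtoc; a sketch, 
assuming the standard SMI label with s2 as the whole-disk backup slice: 

root@kronos:/ # prtvtoc /dev/rdsk/c8t8d0s2
)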

I can use the same devices to create a NEW zpool without issue. 
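That is, a raidz test pool built from the same five disks comes up clean. A 
sketch of the sort of command that succeeds (the name 'testpool' is just a 
placeholder): 

root@kronos:/ # zpool create testpool raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0
root@kronos:/ # zpool destroy testpool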

I fully patched the system after encountering this problem; no change. 

The zpool to which I am adding them is fairly large and in a degraded state 
(three resilvers are running: one that never seems to complete, and two 
related to my attempts to add these new disks), but I didn't think that 
should prevent me from adding another vdev. 

For those who would suggest waiting 20 minutes for the resilver to finish: it 
has been estimating less than 30 minutes to go for the last 12 hours, and 
we're running out of space, so I want to add the new devices sooner rather 
than later. 
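(The estimate comes from re-running zpool status periodically; a trivial loop 
along these lines, interval arbitrary, is enough to watch it over time: 

root@kronos:/ # while true; do zpool status raid | grep scrub; sleep 600; done
)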

Can anyone help? 

Extra details below: 

root@kronos:/ # uname -a
SunOS kronos 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-480R

root@kronos:/ # smpatch analyze
137276-01 SunOS 5.10: uucico patch
122470-02 Gnome 2.6.0: GNOME Java Help Patch
121430-31 SunOS 5.8 5.9 5.10: Live Upgrade Patch
121428-11 SunOS 5.10: Live Upgrade Zones Support Patch

root@kronos:patch # zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
raid  4.32T  4.23T  92.1G    97%  DEGRADED  -

root@kronos:patch # zpool status
  pool: raid
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scrub: resilver in progress for 12h22m, 97.25% done, 0h20m to go
config:

        NAME                STATE     READ WRITE CKSUM
        raid                DEGRADED     0     0     0
          raidz1            ONLINE       0     0     0
            c9t0d0          ONLINE       0     0     0
            c6t0d0          ONLINE       0     0     0
            c2t0d0          ONLINE       0     0     0
            c4t0d0          ONLINE       0     0     0
            c10t0d0         ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            c9t1d0          ONLINE       0     0     0
            c6t1d0          ONLINE       0     0     0
            c2t1d0          ONLINE       0     0     0
            c4t1d0          ONLINE       0     0     0
            c10t1d0         ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            c9t3d0          ONLINE       0     0     0
            c6t3d0          ONLINE       0     0     0
            c2t3d0          ONLINE       0     0     0
            c4t3d0          ONLINE       0     0     0
            c10t3d0         ONLINE       0     0     0
          raidz1            DEGRADED     0     0     0
            c9t4d0          ONLINE       0     0     0
            spare           DEGRADED     0     0     0
              c5t13d0       ONLINE       0     0     0
              c6t4d0        FAULTED      0 12.3K     0  too many errors
            c2t4d0          ONLINE       0     0     0
            c4t4d0          ONLINE       0     0     0
            c10t4d0         ONLINE       0     0     0
          raidz1            DEGRADED     0     0     0
            c9t5d0          ONLINE       0     0     0
            spare           DEGRADED     0     0     0
              replacing     DEGRADED     0     0     0
                c6t5d0s0/o  UNAVAIL      0     0     0  cannot open
                c6t5d0      ONLINE       0     0     0
              c11t13d0      ONLINE       0     0     0
            c2t5d0          ONLINE       0     0     0
            c4t5d0          ONLINE       0     0     0
            c10t5d0         ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            c5t9d0          ONLINE       0     0     0
            c7t9d0          ONLINE       0     0     0
            c3t9d0          ONLINE       0     0     0
            c8t9d0          ONLINE       0     0     0
            c11t9d0         ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            c5t10d0         ONLINE       0     0     0
            c7t10d0         ONLINE       0     0     0
            c3t10d0         ONLINE       0     0     0
            c8t10d0         ONLINE       0     0     0
            c11t10d0        ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            c5t11d0         ONLINE       0     0     0
            c7t11d0         ONLINE       0     0     0
            c3t11d0         ONLINE       0     0     0
            c8t11d0         ONLINE       0     0     0
            c11t11d0        ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            c5t12d0         ONLINE       0     0     0
            c7t12d0         ONLINE       0     0     0
            c3t12d0         ONLINE       0     0     0
            c8t12d0         ONLINE       0     0     0
            c11t12d0        ONLINE       0     0     0
          raidz1            ONLINE       0     0     0
            c9t2d0          ONLINE       0     0     0
            c6t2d0          ONLINE       0     0     0
            replacing       ONLINE       0     0     0
              c11t8d0       ONLINE       0     0     0
              c2t2d0        ONLINE       0     0     0
            c4t2d0          ONLINE       0     0     0
            c10t2d0         ONLINE       0     0     0
        spares
          c6t4d0            INUSE     currently in use
          c3t13d0           AVAIL   
          c7t13d0           AVAIL   
          c11t13d0          INUSE     currently in use

errors: No known data errors