On Sat, May 5, 2018 at 8:44 AM, <[email protected]> wrote:

> I have a pool that I broke years ago by mistakenly adding a disk without
> redundancy as a top-level vdev.
>
> It looks like this:
> config:
>
>         NAME                                         STATE     READ WRITE CKSUM
>         store                                        ONLINE       0     0     0
>           raidz2-0                                   ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/ST3000VN0001-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/ST3KNM0025-REDACTED----REDACTED-ZFS  ONLINE       0     0     0
>             gpt/WD30EFRX-S/N-REDACTED-ZFS            ONLINE       0     0     0
>             gpt/ST3000VN000-REDACTED-ZFS             ONLINE       0     0     0
>           mirror-2                                   ONLINE       0     0     0
>             gpt/SSD850Pro-S/N----REDACTED-FailZFS    ONLINE       0     0     0  block size: 512B configured, 4096B native
>             gpt/SSD850Pro-S/N===-REDACTED-FailZFS    ONLINE       0     0     0  block size: 512B configured, 4096B native
>             gpt/SSD850Pro-S/N===-REDACTED-FailZFS    ONLINE       0     0     0  block size: 512B configured, 4096B native
>
> errors: No known data errors
>
> Full of hope, I upgraded FreeBSD, only to hit a wall:
> # zpool remove store mirror-2
> cannot remove mirror-2: invalid config; all top-level vdevs must have the
> same sector size and not be raidz.
>
>
> The question is: will I ever be able to fix this, or should I look into
> backup/destroy/create?
>
>
The new device removal feature doesn't work with RAID-Z (there can't be any
RAID-Z vdevs in the pool).  Copying from a mirror onto RAID-Z (as you want to
do) doesn't fit into the design of the current feature (at least not if you
want to use less raw space than mirroring).  So I would recommend you back up
and recreate your pool.

FYI, device removal does work with mirrors, including a mix of mirror and
plain-disk top-level vdevs (though that configuration isn't recommended, since
the mixed redundancy is confusing).
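For example, on a hypothetical all-mirror pool (device names made up for
illustration), removal is just:

  zpool create tank mirror da0 da1 mirror da2 da3
  zpool remove tank mirror-1      # data on mirror-1 is evacuated onto mirror-0
  zpool status tank               # shows the evacuation/remapping progress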

--matt
