On 2021-Sep-16, at 13:26, joe mcguckin <joe at via.net> wrote:

> I experienced the same yesterday. I grabbed an old disk that was previously 
> part of a pool. Stuck it in the chassis and did ‘zpool import’ and got the 
> same output you did.

Mine was a single-disk pool. I use zfs just so that I
can use bectl, not for redundancy or the like. So my
configuration is very simple.

> Since the other drives of the pool were missing, the pool could not be 
> imported.
> 
> zpool status reports 'everything ok' because all the existing pools are ok.
> zpool destroy can't destroy the pool because it has not been imported.

Yeah, but the material at the URL it listed just says:

QUOTE
The pool must be destroyed and recreated from an appropriate backup source
END QUOTE

so it says to do something that, as far as I can tell,
could not be done in my context via the normal zpool/zfs
commands.

> I simply created a new pool specifying the drive address of the disk - zfs 
> happily overwrote the old incomplete pool info.

Ultimately, I zeroed out the areas of the media that
held the zfs-related labels. After that, things
operated normally: I could recreate the pool in
the partition, send/receive the backup to it, and
use the restored state. I did not find a way to
use the zpool/zfs commands themselves to fix
the messed-up status. (I did not report
everything that I'd tried.)
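
For reference, a minimal sketch of the sort of sequence
this amounted to is below. The nda0p2 name is from the
gpart output quoted later; the dd extents, pool
properties, and backup-stream path are illustrative
placeholders, not the literal commands I ran:

QUOTE
# ZFS keeps two 256 KiB vdev labels at the front of the
# partition and two more in the last 512 KiB, so zeroing
# a few MiB at each end covers them. (The oseek math
# assumes the partition size is a whole number of MiB,
# which it is here.)
dd if=/dev/zero of=/dev/nda0p2 bs=1m count=4
dd if=/dev/zero of=/dev/nda0p2 bs=1m count=4 \
   oseek=$(( $(diskinfo /dev/nda0p2 | awk '{print $3}') / 1048576 - 4 ))

# Recreate the pool in the now-clean partition (plus
# whatever -o/-O properties the pool should have) and
# restore the datasets from the backup stream.
zpool create zopt0 /dev/nda0p2
zfs receive -Fdu zopt0 < /path/to/backup.zfs-stream
END QUOTE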

> joe
> 
> 
> Joe McGuckin
> ViaNet Communications
> 
> j...@via.net
> 650-207-0372 cell
> 650-213-1302 office
> 650-969-2124 fax
> 
> 
> 
>> On Sep 16, 2021, at 1:01 PM, Mark Millard via freebsd-current 
>> <freebsd-current@freebsd.org> wrote:
>> 
>> What do I do about:
>> 
>> QUOTE
>> # zpool import
>>   pool: zopt0
>>     id: 18166787938870325966
>>  state: FAULTED
>> status: One or more devices contains corrupted data.
>> action: The pool cannot be imported due to damaged devices or data.
>>   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>> config:
>> 
>>        zopt0       FAULTED  corrupted data
>>          nda0p2    UNAVAIL  corrupted data
>> 
>> # zpool status -x
>> all pools are healthy
>> 
>> # zpool destroy zopt0
>> cannot open 'zopt0': no such pool
>> END QUOTE
>> 
>> (I had attempted to clean out the old zfs context on
>> the media and delete/replace the 2 freebsd swap
>> partitions and 1 freebsd-zfs partition, leaving the
>> efi partition in place. Clearly I did not do everything
>> required [or something is very wrong]. zopt0 had been
>> a root-on-ZFS context and would be again. I have a
>> backup of the context to send/receive once the pool
>> in the partition is established.)
>> 
>> For reference, as things now are:
>> 
>> # gpart show
>> =>       40  937703008  nda0  GPT  (447G)
>>         40     532480     1  efi  (260M)
>>     532520       2008        - free -  (1.0M)
>>     534528  937166848     2  freebsd-zfs  (447G)
>>  937701376       1672        - free -  (836K)
>> . . .
>> 
>> (That is not how it looked before I started.)
>> 
>> # uname -apKU
>> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 
>> releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021     
>> root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72
>>   arm64 aarch64 1300139 1300139
>> 
>> I have also tried under:
>> 
>> # uname -apKU
>> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 
>> main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021     
>> root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72
>>   arm64 aarch64 1400032 1400032
>> 
>> after reaching this state. It behaves the same.
>> 
>> The text presented by:
>> 
>> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>> 
>> does not deal with what is happening overall.
>> 
> 


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)

