Hello,
On 25.02.2022 18:30, Steven Hartland wrote:
> Have you tried removing the dead disk physically? I've seen in the
> past a bad disk causing bad data to be sent to the controller,
> causing knock-on issues.
Yup, I did. I've even built 13.0 and tried to import it there. 13.0
complains differently, but still refuses to import:
# zpool import
   pool: data
     id: 15967028801499953224
  state: ONLINE
 status: One or more devices contains corrupted data.
 action: The pool can be imported using its name or numeric identifier.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
 config:

        data        ONLINE
          nvd0      UNAVAIL  corrupted data
          nvd1      ONLINE
And when actually trying to import:
# zpool import -FX data
cannot import 'data': one or more devices is currently unavailable
and I see the following in dmesg:
Feb 25 16:44:41 db0 ZFS[4857]: failed to load zpool data
Feb 25 16:44:41 db0 ZFS[4873]: failed to load zpool data
Feb 25 16:44:41 db0 ZFS[4889]: failed to load zpool data
Feb 25 16:44:41 db0 ZFS[4909]: failed to load zpool data
Feb 25 16:45:13 db0 ZFS[4940]: pool log replay failure, zpool=data
Feb 25 16:45:13 db0 ZFS[4952]: pool log replay failure, zpool=data
Feb 25 16:45:13 db0 ZFS[4964]: pool log replay failure, zpool=data
Feb 25 16:45:13 db0 ZFS[4976]: pool log replay failure, zpool=data
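Judging by those "pool log replay failure" messages, it looks like the ZIL
replay is what trips the import. As far as I understand, a read-only import
skips intent-log replay, so something along these lines might be worth
trying next (-N just imports without mounting the datasets):

# zpool import -N -o readonly=on -f data

If that works, the data could at least be copied off with zfs send/recv
before attempting anything more destructive.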
> Also the output doesn't show multiple devices, only nvd0. I'm hoping
> you didn't use nv raid to create the mirror, as that means there's no
> ZFS protection?
Nope, I'm aware of that. Actually, the redundant drive is still there,
but already dead; it's the FAULTED device 9566965891719887395 in my
quote below.
[root@db0:~]# zpool import
   pool: data
     id: 15967028801499953224
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: http://illumos.org/msg/ZFS-8000-5E
 config:

        data                    FAULTED  corrupted data
          9566965891719887395   FAULTED  corrupted data
          nvd0                  ONLINE
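To settle the mirror question, the vdev label on the surviving NVMe device
should show whether the top-level vdev is a mirror or a plain disk;
something like

# zdb -l /dev/nvd0

(substituting whichever device name the surviving drive has on this host)
dumps the labels, and the vdev_tree section there lists the vdev type.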
Thanks.
Eugene.