A simple working solution for such cases is rebuilding the volume's .glusterfs:
recover the dead node, create fresh bricks, copy the files there, and then set
the extended attributes on them so that .glusterfs is regenerated.
Something like what is mentioned here:
https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
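The steps above can be sketched roughly like this (the volume name "gvol", the brick path /data/brick1, and the remote host "healthy-node" are all hypothetical placeholders; verify each command against the linked thread and your own setup before running anything):

```shell
# 1. On the recovered node, create a fresh brick directory.
mkdir -p /data/brick1

# 2. Copy the surviving files over from a healthy brick,
#    excluding the .glusterfs metadata directory.
rsync -a --exclude='.glusterfs' healthy-node:/data/brick1/ /data/brick1/

# 3. Tag the new brick with the volume's id so glusterd accepts it.
#    Read the id off a healthy brick first:
#      getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1
setfattr -n trusted.glusterfs.volume-id -v 0x<volume-id> /data/brick1

# 4. Force-start the volume and trigger a full self-heal so GlusterFS
#    regenerates the .glusterfs gfid hard links on the new brick.
gluster volume start gvol force
gluster volume heal gvol full
```

The key point is step 3: without the trusted.glusterfs.volume-id xattr, glusterd will refuse to use the fresh directory as a brick; once it is set and the heal runs, the .glusterfs tree is rebuilt from the plain files.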
On Mon, Oct
Hi everyone,
I have a serious issue with one of my peers in a mirrored setup with an arbiter.
Both nodes operate a ZFS filesystem with GlusterFS on top of it.
Because of a malfunctioning controller, one of the ZFS filesystems tells me the
entire pool is corrupted and I need to destroy then