On Wed, Sep 14, 2022 at 7:08 AM wrote:
>
> Hi folks,
>
> my Gluster volume isn't fully healing. We had an outage a couple of
> days ago, and all other files healed successfully. Now, days later, I
> can still see two GFIDs per node remaining in the heal list.
>
> root@storage-001~# for
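For anyone hitting the same symptom, the pending entries and their GFIDs can be inspected per brick with the heal subcommands; `myvol` below is a placeholder volume name, not taken from this thread:

```shell
# List entries still pending heal on each brick (volume name is a placeholder)
gluster volume heal myvol info

# Show entries in split-brain, if any -- long-stuck GFIDs are often split-brain victims
gluster volume heal myvol info split-brain

# Trigger a full self-heal crawl across all bricks
gluster volume heal myvol full
```

If `info split-brain` reports the stuck GFIDs, they will not heal on their own and need manual split-brain resolution first.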
I think I've been in a similar situation. I "solved" it by creating a
new volume on a new set of bricks on the same disks and moving the data
to the new volume, then deleting the old volume and its bricks. I'm
quite sure there's a better way, but the data was nearly static and the
move was the faster fix.
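The workaround described above might look roughly like this; the volume names, brick paths, replica count, and mount points are all placeholder assumptions, and the copy must go through FUSE mounts of the volumes, never directly between brick directories:

```shell
# Create and start a fresh volume on new brick directories on the same disks
# (all names and paths below are placeholders)
gluster volume create newvol replica 2 \
    storage-001:/data/newbrick storage-002:/data/newbrick
gluster volume start newvol

# Copy the data between FUSE mounts of the old and new volumes
mount -t glusterfs storage-001:/oldvol /mnt/oldvol
mount -t glusterfs storage-001:/newvol /mnt/newvol
rsync -aHAX /mnt/oldvol/ /mnt/newvol/

# Retire the old volume once the copy has been verified
gluster volume stop oldvol
gluster volume delete oldvol
```

Repointing clients and verifying the copy before deleting the old volume is the important part; the brick directories of the deleted volume can then be removed by hand.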
Hi,
I would really appreciate it if someone could help with the above
issue. We are stuck: we cannot run a rebalance because of this, so the
data remains unbalanced and we cannot get peak performance from the setup.
Adding gluster info (without the bricks) below. Please let me know if