Thanks Deepu.
I will investigate this. Could you summarize the steps that would help
reproduce this issue?
/sunny
On Fri, Nov 29, 2019 at 7:29 AM deepu srinivasan wrote:
>
> Hi Sunny
> The issue seems to be a bug.
> The issue got fixed when I restarted the glusterd daemon in the
Hi Ashish,
thanks for your reply. To fulfill the "no IO" requirement, I'll have to wait
until the second week of December (9th–14th).
We originally planned to update GlusterFS from 4.1.7 to 5 and then to 6 in
December. Should we do that upgrade before or after running those scripts?
Kind regards
I'm trying to manually corrupt data on the bricks (while the volume is
stopped) and then check whether healing is possible. For example:
Start:
# glusterd --debug
Bricks (on EXT4 mounted with 'rw,relatime'):
# mkdir /root/data0
# mkdir /root/data1
# mkdir /root/data2
Volume:
# gluster volume
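For reference, a minimal sketch of how the remaining reproduction steps might look, assuming a 3-way replica volume named "testvol" built on the three brick directories above (the volume name, mount point, and test file path are placeholders, not taken from the original message):

```shell
# Create a 3-way replica volume on the brick directories
# ('force' is needed because the bricks live under /root).
gluster volume create testvol replica 3 \
    $(hostname):/root/data0 \
    $(hostname):/root/data1 \
    $(hostname):/root/data2 force
gluster volume start testvol

# Mount the volume, write a test file, then stop the volume.
mount -t glusterfs $(hostname):/testvol /mnt
echo "original" > /mnt/testfile
umount /mnt
gluster volume stop testvol

# With the volume stopped, corrupt the copy on one brick only.
echo "garbage" > /root/data0/testfile

# Restart and check whether self-heal repairs the bad copy.
gluster volume start testvol
gluster volume heal testvol
gluster volume heal testvol info
```

Whether self-heal recovers from this depends on the replication type and on whether the corruption touches only file contents or also the extended attributes GlusterFS stores on the bricks.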