Re: [Gluster-users] Volume not healing
On 19/03/21 18:06, Strahil Nikolov wrote:
> Are you running it against the fuse mountpoint ?
Yup.

> You are not supposed to see 'no such file or directory' ... Maybe
> something more serious is going on.
Between that and the duplicated files, that's for sure. But I don't know
where to look to at least diagnose (if not fix) this :(

As I said, part of the issue is probably due to the multiple OOM
failures and the repeated attempts to remove a brick. I'm currently
emptying the volume; then I'll recreate it from scratch, hoping for the
best.

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
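Since duplicated replicas are expected to have identical contents, a checksum scan of the evacuated copy is one way to spot them before rebuilding the volume. A minimal sketch (the `evac` directory and file names are made up for the demo, not from the thread; the real duplicates reported here appeared under the same name, which a plain filesystem copy cannot reproduce, so this compares by content instead):

```shell
# Checksum every file in the evacuated tree and list those whose
# content appears more than once (likely replica pairs).
mkdir -p evac/dir1 evac/dir2
printf 'replica payload\n' > evac/dir1/data.bin
printf 'replica payload\n' > evac/dir2/data.bin   # same content: candidate duplicate
printf 'unique payload\n'  > evac/dir1/other.bin

# md5sum output is "<32-hex-chars>  <path>"; sort by hash, then print
# only lines whose first 32 characters repeat (-w32 -D).
find evac -type f -exec md5sum {} + | sort | uniq -w32 -D
```

This lists only the two `data.bin` entries; `other.bin` has unique content and is filtered out.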
Re: [Gluster-users] Volume not healing
On 20/03/21 15:21, Zenon Panoussis wrote:
> When you have 0 files that need healing,
> gluster volume heal BigVol granular-entry-heal enable
> I have tested with and without granular and, empirically,
> without any hard statistics, I find granular considerably
> faster.
Thanks for the hint, but it's already set. I usually enable it as soon
as I create the volume :) I don't understand why it's not the default :)
Re: [Gluster-users] Volume not healing
> Is it possible to speed it up? Nodes are nearly idle...
When you have 0 files that need healing:

gluster volume heal BigVol granular-entry-heal enable

I have tested with and without granular and, empirically, without any
hard statistics, I find granular considerably faster.
Re: [Gluster-users] Volume not healing
On 19/03/21 13:17, Strahil Nikolov wrote:
> find /FUSE/mountpoint -exec stat {} \;
Running it now (redirecting stdout to /dev/null). It's finding quite a
lot of "no such file or directory" errors.
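A variant of that walk that also captures the errors for later review, instead of letting them scroll by. This is a local demo, not a run against the cluster: the `walk` directory stands in for the FUSE mountpoint, and a dangling symlink plus `stat -L` is used purely to make one entry fail with ENOENT the way a stale volume entry would (the original command used plain `stat`):

```shell
# Build a tiny tree with one entry that cannot be resolved.
mkdir -p walk
touch walk/good
ln -s nowhere walk/dangling          # stat -L on this fails with ENOENT

# Walk the tree, discard normal output, keep only the errors.
find walk -exec stat -L {} \; >/dev/null 2>walk-errors.log

# Count the "No such file or directory" hits; on a healthy tree
# this would be 0, here it is 1 (the dangling symlink).
grep -c 'No such file or directory' walk-errors.log
```

On the real mountpoint, the error log then gives a list of the problematic paths to investigate.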
Re: [Gluster-users] Volume not healing
On 19/03/21 11:06, Diego Zuccato wrote:
> I tried to run "gluster v heal BigVol info summary" and got quite a high
> count of entries to be healed on some bricks:
> # gluster v heal BigVol info summary|grep pending|grep -v ' 0$'
> Number of entries in heal pending: 41
> Number of entries in heal pending: 2971
> Number of entries in heal pending: 20
> Number of entries in heal pending: 2393
>
> Too bad that those numbers aren't decreasing with time.
Slight correction: it seems the numbers are *slowly* decreasing. After
one hour I see:

# gluster v heal BigVol info summary|grep pending|grep -v ' 0$'
Number of entries in heal pending: 41
Number of entries in heal pending: 2955
Number of entries in heal pending: 20
Number of entries in heal pending: 2384

Is it possible to speed it up? Nodes are nearly idle...
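On the "speed it up" question: the self-heal daemon processes one entry at a time per brick by default, and Gluster exposes tuning knobs for it. An untested ops fragment (not from this thread; option names are from the Gluster volume-option set, values are illustrative, and a run requires the actual cluster):

```shell
# Let the self-heal daemon work on more entries in parallel
# (cluster.shd-max-threads defaults to 1).
gluster volume set BigVol cluster.shd-max-threads 4

# Allow a longer queue of entries waiting to be healed.
gluster volume set BigVol cluster.shd-wait-qlength 2048

# Kick off an index heal again after tuning.
gluster volume heal BigVol
```

Raising shd-max-threads trades idle CPU on the nodes for faster heal throughput, which seems to match the "nodes are nearly idle" observation.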
[Gluster-users] Volume not healing
Hello all.

I have a "problematic" volume. It was Rep3a1 (replica 3, arbiter 1) with
a dedicated VM for the arbiters. Too bad I underestimated the RAM needs
and the arbiter VM crashed frequently from OOM (it had just 8GB
allocated). Even the other two nodes sometimes crashed during a
remove-brick operation (other thread). So I've had to stop and re-run
the remove-brick multiple times, even rebooting the nodes, but it never
completed.

Now I've decided to move all the files to temporary storage to rebuild
the volume from scratch, but I'm finding directories with duplicated
files (two identical files: same name, size and contents), probably the
two replicas.

I tried to run "gluster v heal BigVol info summary" and got quite a high
count of entries to be healed on some bricks:

# gluster v heal BigVol info summary|grep pending|grep -v ' 0$'
Number of entries in heal pending: 41
Number of entries in heal pending: 2971
Number of entries in heal pending: 20
Number of entries in heal pending: 2393

Too bad those numbers aren't decreasing with time. It seems no entries
are considered to be in split-brain condition (all counts from "gluster
v heal BigVol info split-brain" are 0).

Is there something I can do to convince Gluster to heal those entries
without going entry-by-entry manually?

Thanks.
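The filter in that pipeline keeps only the non-zero "heal pending" counters. A self-contained demo of how it behaves (the summary text below is simulated, since a real run needs the cluster; the real output per brick also includes split-brain and "possibly healing" counters, which the `pending` match skips):

```shell
# Simulated `gluster v heal BigVol info summary` output for two bricks.
cat > summary.txt <<'EOF'
Brick node1:/bricks/BigVol
Status: Connected
Total Number of entries: 41
Number of entries in heal pending: 41
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick node2:/bricks/BigVol
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
EOF

# Keep the "heal pending" lines, drop the ones that are zero.
grep pending summary.txt | grep -v ' 0$'
# prints: Number of entries in heal pending: 41
```

So each surviving line corresponds to one brick that still has entries waiting to be healed.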