Hi David,

Doing a full heal after deleting the gfid entries (and the bad copy) is fine. It is not dangerous.
Regards,
Raghavendra

On Mon, Mar 4, 2019 at 9:44 AM David Spisla <[email protected]> wrote:
> Hello Gluster Community,
>
> I have questions and notes concerning the steps mentioned in
> https://github.com/gluster/glusterfs/issues/491
>
> "2. Delete the corrupted files":
> In my experience there are two GFID files if a copy gets corrupted.
> Example:
>
> $ find /gluster/brick1/glusterbrick/.glusterfs -name fc36e347-53c7-4a0a-8150-c070143d3b34
> /gluster/brick1/glusterbrick/.glusterfs/quarantine/fc36e347-53c7-4a0a-8150-c070143d3b34
> /gluster/brick1/glusterbrick/.glusterfs/fc/36/fc36e347-53c7-4a0a-8150-c070143d3b34
>
> Both GFID files have to be deleted. If a copy is NOT corrupted, there seems
> to be no GFID file in .glusterfs/quarantine. Even if one executes scrub
> ondemand, the file is not there. The file in .glusterfs/quarantine appears
> once one executes "scrub status".
>
> "3. Restore the file":
> Alternatively, one can trigger self heal manually with
> gluster volume heal VOLNAME
> But in my experience this is not working. One has to trigger a full heal:
> gluster volume heal VOLNAME full
>
> Imagine one wants to restore a copy with a manual self heal. Is it
> necessary to set some volume options (stat-prefetch, dht.force-readdirp
> and performance.force-readdirp disabled) and to mount via FUSE with some
> special parameters to heal the file?
> In my experience I only do a full heal after deleting the bad copy and the
> GFID files. This seems to be working. Or is it dangerous?
>
> Regards
> David Spisla
>
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> https://lists.gluster.org/mailman/listinfo/gluster-users
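For the archives, the repair sequence discussed in this thread can be sketched as a short shell script. This is a hedged sketch, not an official procedure: the brick path and GFID are the example values from David's mail, the volume name `myvol` is hypothetical, and the location of the corrupted data file itself is deliberately left as a placeholder because it has to be looked up separately (e.g. from the scrub status output). The destructive commands are commented out so nothing is deleted by accident.

```shell
#!/bin/sh
# Sketch of the repair steps from this thread.
# BRICK and GFID are the example values from the mail; VOLNAME is hypothetical.
BRICK=/gluster/brick1/glusterbrick
GFID=fc36e347-53c7-4a0a-8150-c070143d3b34
VOLNAME=myvol

# The hard link under .glusterfs is sharded by the first two byte pairs
# of the GFID, e.g. .glusterfs/fc/36/fc36e347-...
p1=$(printf '%s' "$GFID" | cut -c1-2)
p2=$(printf '%s' "$GFID" | cut -c3-4)
link_path="$BRICK/.glusterfs/$p1/$p2/$GFID"
quarantine_path="$BRICK/.glusterfs/quarantine/$GFID"

# These are the two GFID entries seen in the find output above:
echo "$quarantine_path"
echo "$link_path"

# Step 1: delete both GFID entries and the corrupted data file on the brick
# (the data file's real path must be determined separately; the one below
# is a placeholder):
# rm -f "$quarantine_path" "$link_path"
# rm -f "$BRICK/path/to/bad/copy"

# Step 2: trigger a full heal; per the thread, a plain
# "gluster volume heal $VOLNAME" was not sufficient:
# gluster volume heal "$VOLNAME" full
```

The only live commands here compute and print the two GFID entry paths; everything that modifies the brick or the volume is left commented for the operator to run deliberately, one step at a time.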
