[Gluster-users] File\Directory not healing

2023-02-07 Thread David Dolan
Hi All.

Hoping you can help me with a healing problem. I have one file which didn't
self-heal.
It looks to be a problem with a directory in the path, as one node says it's
dirty. I have a replica volume with an arbiter.
This is what the three nodes say, one brick on each:

Node1
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.volume-client-2=0x0001
trusted.afr.dirty=0x

Node2
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.volume-client-2=0x0001
trusted.afr.dirty=0x

Node3(Arbiter)
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.dirty=0x0001

Since Node 3 (the arbiter) sees it as dirty and it looks like Node 1 and Node
2 have good copies, I was thinking of running the following on Node 1, which
I believe would tell Node 2 and Node 3 to sync from Node 1.
I'd then kick off a heal on the volume.

setfattr -n trusted.afr.volume-client-1 -v 0x0001
/path/to/dir
setfattr -n trusted.afr.volume-client-2 -v 0x0001
/path/to/dir
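
To kick off the heal afterwards, I'd run something like this (with VOLNAME
standing in for the actual volume name):

gluster volume heal VOLNAME
gluster volume heal VOLNAME info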

client-0 is Node 1, client-1 is Node 2 and client-2 is Node 3. I've verified
that the hard links with the gfid are in the xattrop directory.
Is this the correct way to heal and resolve the issue?

Thanks
David






Re: [Gluster-users] Quick way to fix stale gfids?

2023-02-07 Thread Diego Zuccato
The contents do not match exactly, but the only difference is the 
"option shared-brick-count" line that sometimes is 0 and sometimes 1.


The command you gave could be useful for the files that still need
healing with the source still present, but the files related to the
stale gfids have been deleted, so "find -samefile" won't find anything.
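
As an aside, since a file deleted from the brick leaves its .glusterfs
hard link behind with a link count of 1, those orphans can probably be
listed directly. A rough sketch (with /PATH/TO/BRICK as a placeholder,
skipping the indices directory):

find /PATH/TO/BRICK/.glusterfs -path '*/indices' -prune -o -type f -links 1 -print

That only covers regular files; directory gfids are symlinks under
.glusterfs, so they'd need a separate pass.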


For the other files reported by heal info, I saved the output to 
'healinfo', then:
  for T in $(grep '^/' healinfo | sort | uniq); do stat /mnt/scratch$T > /dev/null; done


but I still see a lot of 'Transport endpoint is not connected' and 
'Stale file handle' errors :( And many 'No such file or directory'...


I don't understand the first two errors, since /mnt/scratch has been
freshly mounted after enabling client healing, and gluster v info does
not highlight unconnected/down bricks.


Diego

On 06/02/2023 at 22:46, Strahil Nikolov wrote:
I'm not sure if the md5sum has to match, but at least the content
should.
In modern versions of GlusterFS the client-side healing is disabled,
but it's worth trying.
You will need to enable cluster.metadata-self-heal,
cluster.data-self-heal and cluster.entry-self-heal, and then create a
small one-liner that identifies the names of the files/dirs from the
volume heal info output, so you can stat them through the FUSE mount.


Something like this:


for i in $(gluster volume heal VOLNAME info | awk -F '<gfid:|>' '/gfid:/ {print $2}'); do
  find /PATH/TO/BRICK/ -samefile /PATH/TO/BRICK/.glusterfs/${i:0:2}/${i:2:2}/$i \
    | awk '!/.glusterfs/ {gsub("/PATH/TO/BRICK", "stat /MY/FUSE/MOUNTPOINT", $0); print $0}'
done


Then just copy-paste the output and you will trigger the client-side
heal only on the affected gfids.
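
For the options mentioned above, enabling them would be something along
these lines (with VOLNAME standing in for the actual volume name):

gluster volume set VOLNAME cluster.metadata-self-heal on
gluster volume set VOLNAME cluster.data-self-heal on
gluster volume set VOLNAME cluster.entry-self-heal on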


Best Regards,
Strahil Nikolov
On Monday, 6 February 2023 at 10:19:02 GMT+2, Diego Zuccato wrote:



Oops... Re-including the list that got excluded in my previous answer :(

I generated md5sums of all files in vols/ on clustor02 and compared them to
the other nodes (clustor00 and clustor01).
There are differences in the volfiles (shouldn't shared-brick-count always
be 1, since every data brick is on its own fs? Quorum bricks, OTOH, share a
single partition on SSD and should always be 15, but in both cases it's
sometimes 0).

I nearly got a stroke when I saw diff output for the 'info' files, but once
I sorted them their contents matched. Phew!
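
In case it's useful, the sorted comparison can be done directly with
something like this (assuming bash and ssh access between the nodes,
with VOLNAME as a placeholder):

  diff <(sort /var/lib/glusterd/vols/VOLNAME/info) <(ssh clustor00 sort /var/lib/glusterd/vols/VOLNAME/info)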

Diego

On 03/02/2023 at 19:01, Strahil Nikolov wrote:
 > This one doesn't look good:
 >
 >
 > [2023-02-03 07:45:46.896924 +] E [MSGID: 114079]
 > [client-handshake.c:1253:client_query_portmap] 0-cluster_data-client-48:
 > remote-subvolume not set in volfile []
 >
 >
 > Can you compare all vol files in /var/lib/glusterd/vols/ between the
 > nodes?

 > I have the suspicion that there is a vol file mismatch (maybe
 > /var/lib/glusterd/vols//*-shd.vol).
 >
 > Best Regards,
 > Strahil Nikolov
 >
 >    On Fri, Feb 3, 2023 at 12:20, Diego Zuccato
 >    <diego.zucc...@unibo.it> wrote:
 >    Can't see anything relevant in glfsheal log, just messages related to
 >    the crash of one of the nodes (the one that had the mobo replaced... I
 >    fear some on-disk structures could have been silently damaged by RAM
 >    errors and that makes gluster processes crash, or it's just an issue
 >    with enabling brick-multiplex).
 >    -8<--
 >    [2023-02-03 07:45:46.896924 +] E [MSGID: 114079]
 >    [client-handshake.c:1253:client_query_portmap]
 >    0-cluster_data-client-48:
 >    remote-subvolume not set in volfile []
 >    [2023-02-03 07:45:46.897282 +] E
 >    [rpc-clnt.c:331:saved_frames_unwind] (-->
 >    /lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x195)[0x7fce0c867b95]
 >    (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0x72fc)[0x7fce0c0ca2fc] (-->
 >    /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x109)[0x7fce0c0d2419]
 >    (--> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0x10308)[0x7fce0c0d3308] (-->
 >    /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7fce0c0ce7e6]
 >    ) 0-cluster_data-client-48: forced unwinding frame type(GF-DUMP)
 >    op(NULL(2)) called at 2023-02-03 07:45:46.891054 + (xid=0x13)
 >    -8<--
 >
 >    Well, actually I *KNOW* the files outside .glusterfs have been deleted
 >    (by me :) ). That's why I call those 'stale' gfids.
 >    Affected entries under .glusterfs usually have link count = 1 =>
 >    nothing 'find' can find.
 >    Since I already recovered those files (before deleting from bricks),
 >    can .glusterfs entries be deleted too or should I check something else?
 >    Maybe I should create a script that finds all files/dirs (not symlinks,
 >    IIUC) in .glusterfs on all bricks/arbiters and moves 'em to a temp dir?

 >
 >    Diego
 >
 >    On 02/02/2023 at 23:35, Strahil Nikolov wrote:
 >      > Any issues reported in /var/log/glusterfs/glfsheal-*.log?
 >      >
 >      > The easiest way to identify the affected entries is to run:
 >      > find /FULL/PATH/TO/BRICK/ -samefile
 >      >
 

Re: [Gluster-users] GlusterFS 11 is out!

2023-02-07 Thread Gilberto Ferreira
Here we go

https://download.gluster.org/pub/gluster/glusterfs/11/
---
Gilberto Nunes Ferreira






On Tue, 7 Feb 2023 at 17:05, sacawulu wrote:

> But *is* it out?
>
> I don't see it anywhere...
>
> MJ
>
>
> On 07-02-2023 at 18:07, Gilberto Ferreira wrote:
> > Hello guys!
> > So what is the good news about this new release?
> > Is anybody using it?
> >
> > Thanks for any feedback!
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
>






[Gluster-users] GlusterFS 11 is out!

2023-02-07 Thread Gilberto Ferreira
Hello guys!
So what is the good news about this new release?
Is anybody using it?

Thanks for any feedback!


---
Gilberto Nunes Ferreira



