Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-15 Thread Ravishankar N



On 11/15/2018 09:17 PM, mabi wrote:

‐‐‐ Original Message ‐‐‐
On Thursday, November 15, 2018 1:41 PM, Ravishankar N  
wrote:


Thanks, noted. One more query. Are there files inside each of these
directories? Or is it just empty directories?

You will find below the content of each of these 3 directories, taken from the brick
on node 1:

i)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10

drwxr-xr-x  4 www-data www-data  4 Nov  5 14:19 .
drwxr-xr-x 31 www-data www-data 31 Nov  5 14:23 ..
drwxr-xr-x  3 www-data www-data  3 Nov  5 14:19 dir11
drwxr-xr-x  3 www-data www-data  3 Nov  5 14:19 another_dir

ii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/
drwxr-xr-x 3 www-data www-data 3 Nov  5 14:19 .
drwxr-xr-x 4 www-data www-data 4 Nov  5 14:19 ..
drwxr-xr-x 2 www-data www-data 4 Nov  5 14:19 oc_dir

iii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir

drwxr-xr-x 2 www-data www-data   4 Nov  5 14:19 .
drwxr-xr-x 3 www-data www-data   3 Nov  5 14:19 ..
-rw-r--r-- 2 www-data www-data  32 Nov  5 14:19 fileKey
-rw-r--r-- 2 www-data www-data 512 Nov  5 14:19 username.shareKey

So, as you can see from the output above, only the "oc_dir" directory has two files
inside.

Okay, I'm assuming the list of files and directories is the same on nodes 2 and 3
as well. Correct me if that isn't the case.

symlinks are only for dirs. For files, they would be hard links to the
actual files. So if stat
../brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf gives you
a file, then you can use find -samefile to get the other hardlinks like so:
# cd /brick/.glusterfs/aa/e4/
# find /brick -samefile aae4098a-1a71-4155-9cc9-e564b89957cf

If it is a hardlink, then you can do a getfattr on
/brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf itself.
-Ravi

Thank you for explaining this important part. So yes, with your help I could find the
filenames associated with these 2 GFIDs and guess what? They are the 2 files listed
in the output above for the "oc_dir" directory. Have a look at this:

# find /data/myvol-pro/brick -samefile aae4098a-1a71-4155-9cc9-e564b89957cf
/data/myvol-pro/brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf
/data/myvol-pro/brick/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir/fileKey

# find /data/myvol-pro/brick -samefile 3c92459b-8fa1-4669-9a3d-b38b8d41c360
/data/myvol-pro/brick/.glusterfs/3c/92/3c92459b-8fa1-4669-9a3d-b38b8d41c360
/data/myvol-pro/brick/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir/username.shareKey
Okay, as asked in the previous mail, please share the getfattr output 
from all bricks for these 2 files. I think once we have this, we can try 
either 'adjusting' the gfid and symlinks on node 2 for dir11 and 
oc_dir, or see if we can set afr xattrs on dir10 so that self-heal purges 
everything under it on node 2 and recreates it using the other 2 nodes.
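
For reference, a minimal sketch of how that output could be collected (run on each
of the 3 nodes against its own brick; the paths are taken from the find output above):

# getfattr -d -m . -e hex /data/myvol-pro/brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf
# getfattr -d -m . -e hex /data/myvol-pro/brick/.glusterfs/3c/92/3c92459b-8fa1-4669-9a3d-b38b8d41c360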

-Ravi


I hope that helps the debugging further; let me know if you need anything else.


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GCS milestone 0.3

2018-11-15 Thread John Strunk
Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.3. GCS is following a release cadence of 2 weeks for all the
point releases leading up to 1.0. This enables developers and users to get
an overall experience of the latest changes on a more frequent basis. From
GCS 1.0 onward, we’ll re-evaluate the release cadence and update the list.

Highlights and updates since v0.2:
- Initial Prometheus and Grafana integration
- Glusterd2 container includes gluster-prometheus to export metrics
- CSI driver now supports snapshots
- CSI driver is pulled from a different location (that contains auto-built
images)

Included components:
- Glusterd2: https://github.com/gluster/glusterd2
- Gluster CSI driver: https://github.com/gluster/gluster-csi-driver
- Gluster-prometheus: https://github.com/gluster/gluster-prometheus

To get started with this snapshot, please see the releases page [1] and the
deploy instructions [2].

If you are interested in contributing, please see [3] or contact the
gluster-devel mailing list.

Regards,
Team GCS

[1] https://github.com/gluster/gcs/releases
[2] https://github.com/gluster/gcs/tree/master/deploy
[3] https://github.com/gluster/gcs
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-15 Thread mabi
‐‐‐ Original Message ‐‐‐
On Thursday, November 15, 2018 1:41 PM, Ravishankar N  
wrote:

> Thanks, noted. One more query. Are there files inside each of these
> directories? Or is it just empty directories?

You will find below the content of each of these 3 directories, taken from the brick
on node 1:

i)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10

drwxr-xr-x  4 www-data www-data  4 Nov  5 14:19 .
drwxr-xr-x 31 www-data www-data 31 Nov  5 14:23 ..
drwxr-xr-x  3 www-data www-data  3 Nov  5 14:19 dir11
drwxr-xr-x  3 www-data www-data  3 Nov  5 14:19 another_dir

ii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/
drwxr-xr-x 3 www-data www-data 3 Nov  5 14:19 .
drwxr-xr-x 4 www-data www-data 4 Nov  5 14:19 ..
drwxr-xr-x 2 www-data www-data 4 Nov  5 14:19 oc_dir

iii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir

drwxr-xr-x 2 www-data www-data   4 Nov  5 14:19 .
drwxr-xr-x 3 www-data www-data   3 Nov  5 14:19 ..
-rw-r--r-- 2 www-data www-data  32 Nov  5 14:19 fileKey
-rw-r--r-- 2 www-data www-data 512 Nov  5 14:19 username.shareKey

So, as you can see from the output above, only the "oc_dir" directory has two files
inside.


> symlinks are only for dirs. For files, they would be hard links to the
> actual files. So if stat
> ../brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf gives you
> a file, then you can use find -samefile to get the other hardlinks like so:
> # cd /brick/.glusterfs/aa/e4/
> # find /brick -samefile aae4098a-1a71-4155-9cc9-e564b89957cf
>
> If it is a hardlink, then you can do a getfattr on
> /brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf itself.
> -Ravi

Thank you for explaining this important part. So yes, with your help I could 
find the filenames associated with these 2 GFIDs and guess what? They are the 2 
files listed in the output above for the "oc_dir" directory. Have a look 
at this:

# find /data/myvol-pro/brick -samefile aae4098a-1a71-4155-9cc9-e564b89957cf
/data/myvol-pro/brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf
/data/myvol-pro/brick/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir/fileKey

# find /data/myvol-pro/brick -samefile 3c92459b-8fa1-4669-9a3d-b38b8d41c360
/data/myvol-pro/brick/.glusterfs/3c/92/3c92459b-8fa1-4669-9a3d-b38b8d41c360
/data/myvol-pro/brick/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir/username.shareKey

I hope that helps the debugging further; let me know if you need anything else.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] does your samba work with 4.1.x (centos 7.5)

2018-11-15 Thread Diego Remolina
I will try to do the wireshark captures within a week from now.

Up until recently, problems were only with Revit central files, but we
lost a server and ran on one (out of two) for a few days until the
motherboard arrived, and we started seeing problems with other files
while writing them. At that point I decided to switch everything to
FUSE mounts to get rid of vfs objects = glusterfs in all shares. I also
downgraded Samba by one version (now running 4.7.1-6.el7_5), since I had
recently upgraded to the latest version CentOS had published at the time
(4.7.1-9.el7_5) shortly before seeing more errors.
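
For clarity, a minimal sketch of the two share styles being compared here (share names
and the FUSE mount point are illustrative; the volume name is from this thread):

# share backed by the Samba glusterfs VFS module
[projects-vfs]
   path = /
   vfs objects = glusterfs
   glusterfs:volume = export
   glusterfs:logfile = /var/log/samba/glusterfs-export.log

# the same data exported from a local FUSE mount of the volume instead
[projects-fuse]
   path = /mnt/export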

We are also working on getting ovirt and self-hosted engine out of
these servers, so once that happens, I can even upgrade to the latest
glusterfs 4.1.5 and do tests on a more current glusterfs version.

Someone at the office is in contact with another IT person who worked
at a large architectural firm; that person also mentioned that Gluster
did not work with Revit when they tested it. I have no other details
than this.

Could you also try to reproduce it yourself? It may help you to
capture all the information you need. A free trial of Revit is
available at:

https://www.autodesk.com/products/revit/free-trial

You can just use one of the demo files that Revit comes with and
convert it to a central file while it is stored on a gluster volume shared
with vfs objects = glusterfs in Samba; that should trigger the problems.

Diego
On Thu, Nov 15, 2018 at 8:04 AM Anoop C S  wrote:
>
> On Wed, 2018-11-14 at 22:19 -0500, Diego Remolina wrote:
> > Hi,
> >
> > Please download the logs from:
> >
> > https://www.dropbox.com/s/4k0zvmn4izhjtg7/samba-logs.tar.bz2?dl=0
>
> [2018/11/14 22:01:31.974084, 10, pid=7577, effective(1009, 513), real(1009, 
> 0)]
> ../source3/smbd/smb2_server.c:2279(smbd_smb2_request_dispatch)
>   smbd_smb2_request_dispatch: opcode[SMB2_OP_FLUSH] mid = 9303
> [2018/11/14 22:01:31.974123,  4, pid=7577, effective(1009, 513), real(1009, 
> 0)]
> ../source3/smbd/uid.c:384(change_to_user)
>   Skipping user change - already user
> [2018/11/14 22:01:31.974163, 10, pid=7577, effective(1009, 513), real(1009, 
> 0)]
> ../source3/smbd/smb2_flush.c:134(smbd_smb2_flush_send)
>   smbd_smb2_flush: Test-Project_backup/preview.1957.dat - fnum 2399596398
>
> I see the above flush request (SMB2_OP_FLUSH) without a response being logged,
> compared to other request-response pairs for SMB2_OP_QUERY_DIRECTORY,
> SMB2_OP_IOCTL, SMB2_OP_WRITE and SMB2_OP_READ, which brings me to the
> following bug:
>
> https://bugzilla.samba.org/show_bug.cgi?id=13297
>
> Is it also possible for you to collect network traces using Wireshark from the
> Windows client to the Samba server, along with the corresponding Samba logs?
>
> > These options had to be set in the [global] section:
> > kernel change notify = no
> > kernel oplocks = no
>
> Well, 'kernel oplocks' and 'posix locking' are listed as (S) in smb.conf(5), but
> they can also be used in the [global] section, in which case they affect all
> services. Anyway, this should be fine.
>
> > I also set log level = 10
>
> Good one.
>
> > I renamed the file to Test-Project.rvt for simplicity. I opened Revit and,
> > from Revit, opened this file. I then started attempting to save it as a
> > central at around 22:01:30. The file save then got stuck and at around 22:05
> > it finally failed, saying that two .dat files did not exist.
>
> Do you experience problems with other applications and/or with other file 
> types? Is this specific to
> Revit?
>
> > Diego
> > On Tue, Nov 13, 2018 at 8:46 AM Anoop C S  wrote:
> > >
> > > On Tue, 2018-11-13 at 07:50 -0500, Diego Remolina wrote:
> > > > >
> > > > > Thanks for explaining the issue.
> > > > >
> > > > > I understand that you are experiencing hangs while doing some operations
> > > > > on files/directories in a GlusterFS volume share from a Windows client.
> > > > > For simplicity can you attach the output of the following commands:
> > > > >
> > > > > # gluster volume info 
> > > > > # testparm -s --section-name global
> > > >
> > > > gluster v status export
> > > > Status of volume: export
> > > > Gluster process                            TCP Port  RDMA Port  Online  Pid
> > > > ------------------------------------------------------------------------------
> > > > Brick 10.0.1.7:/bricks/hdds/brick          49153     0          Y       2540
> > > > Brick 10.0.1.6:/bricks/hdds/brick          49153     0          Y       2800
> > > > Self-heal Daemon on localhost              N/A       N/A        Y       2912
> > > > Self-heal Daemon on 10.0.1.6               N/A       N/A        Y       3107
> > > > Self-heal Daemon on 10.0.1.5               N/A       N/A        Y       5877
> > > >
> > > > Task Status of Volume export
> > > > ------------------------------------------------------------------------------
> > > > There are no active volume tasks

[Gluster-users] Geo Replication / Error: bash: gluster: command not found

2018-11-15 Thread Christos Tsalidis
Hi all,

I am encountering a problem setting up my geo-replication session on GlusterFS
4.1.5 on CentOS 7.5.1804.

After I run:

gluster volume geo-replication mastervol geoaccount@servere::slavevol create push-pem

I see the following:

gluster command on geoaccount@servere failed. Error: bash: gluster: command not found
geo-replication command failed

Do you know where the problem is?
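
(For context, the error means that the shell started for geoaccount on servere cannot
find the gluster binary. A minimal check of that assumption, using the same account
and host as in the command above:

ssh geoaccount@servere 'which gluster; echo $PATH'

If gluster lives in /usr/sbin but that directory is not in the non-root user's PATH,
the create step fails with exactly this "command not found" error.)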

Thanks in advance!

BR
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is it recommended for Glustereventsd be running on all nodes?

2018-11-15 Thread Jeevan Patnaik
Hi,

And the Gluster version is 3.12.5.

Regards,
Jeevan.

On Thu, Nov 15, 2018, 7:40 PM Jeevan Patnaik wrote:
>
> Hi All,
>
> I have implemented a webhook and attached it to glustereventsd to listen for
> events and to send alerts on critical events.
>
> So I manually categorized events as critical, informational and warning.
>
> We are only interested in events that can cause issues for end users, like
> BRICK_DISCONNECTED (reduced redundancy of the volume), QUORUM_LOST
> (possible downtime of a subvolume), QUOTA_CROSSES_SOFTLIMIT, AFR_SPLIT_BRAIN,
> etc., and have not included other events that result from someone doing
> admin tasks like PEER_ATTACH.
>
> And I see that at least some events, like PEER_ATTACH, are local to the node
> and do not appear from other gluster nodes.
>
> My idea is to run the glustereventsd service only on a gluster admin node, to
> avoid possible load on the storage-serving nodes due to traffic caused by
> webhook events.
>
> So my question is: are there any events local to a node which will be missed
> on the admin node but are fatal to end users, assuming that the admin node
> will always be running?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Is it recommended for Glustereventsd be running on all nodes?

2018-11-15 Thread Jeevan Patnaik
Hi All,

I have implemented a webhook and attached it to glustereventsd to listen for
events and to send alerts on critical events.

So I manually categorized events as critical, informational and warning.

We are only interested in events that can cause issues for end users, like
BRICK_DISCONNECTED (reduced redundancy of the volume), QUORUM_LOST
(possible downtime of a subvolume), QUOTA_CROSSES_SOFTLIMIT, AFR_SPLIT_BRAIN,
etc., and have not included other events that result from someone doing
admin tasks like PEER_ATTACH.

And I see that at least some events, like PEER_ATTACH, are local to the node
and do not appear from other gluster nodes.

My idea is to run the glustereventsd service only on a gluster admin node, to
avoid possible load on the storage-serving nodes due to traffic caused by
webhook events.

So my question is: are there any events local to a node which will be missed
on the admin node but are fatal to end users, assuming that the admin node
will always be running?
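
For reference, a minimal sketch of the kind of webhook registration described above,
using the gluster-eventsapi CLI (the URL and host name are placeholders):

# register the webhook on one node and sync the configuration to the other peers
gluster-eventsapi webhook-add http://adminnode:9000/listen
gluster-eventsapi sync
# check that glustereventsd is reachable on all peers and that the webhook is listed
gluster-eventsapi status
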
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] does your samba work with 4.1.x (centos 7.5)

2018-11-15 Thread Anoop C S
On Wed, 2018-11-14 at 22:19 -0500, Diego Remolina wrote:
> Hi,
> 
> Please download the logs from:
> 
> https://www.dropbox.com/s/4k0zvmn4izhjtg7/samba-logs.tar.bz2?dl=0

[2018/11/14 22:01:31.974084, 10, pid=7577, effective(1009, 513), real(1009, 0)]
../source3/smbd/smb2_server.c:2279(smbd_smb2_request_dispatch)
  smbd_smb2_request_dispatch: opcode[SMB2_OP_FLUSH] mid = 9303
[2018/11/14 22:01:31.974123,  4, pid=7577, effective(1009, 513), real(1009, 0)]
../source3/smbd/uid.c:384(change_to_user)
  Skipping user change - already user
[2018/11/14 22:01:31.974163, 10, pid=7577, effective(1009, 513), real(1009, 0)]
../source3/smbd/smb2_flush.c:134(smbd_smb2_flush_send)
  smbd_smb2_flush: Test-Project_backup/preview.1957.dat - fnum 2399596398

I see the above flush request (SMB2_OP_FLUSH) without a response being logged,
compared to other request-response pairs for SMB2_OP_QUERY_DIRECTORY,
SMB2_OP_IOCTL, SMB2_OP_WRITE and SMB2_OP_READ, which brings me to the
following bug:

https://bugzilla.samba.org/show_bug.cgi?id=13297

Is it also possible for you to collect network traces using Wireshark from the
Windows client to the Samba server, along with the corresponding Samba logs?
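
A minimal capture sketch on the Samba server, assuming SMB over port 445 and
illustrative interface/client values:

tcpdump -i eth0 -s 0 -w /tmp/smb-trace.pcap host <windows-client-ip> and port 445

The resulting pcap can then be opened in Wireshark alongside the matching Samba logs.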

> These options had to be set in the [global] section:
> kernel change notify = no
> kernel oplocks = no

Well, 'kernel oplocks' and 'posix locking' are listed as (S) in smb.conf(5), but
they can also be used in the [global] section, in which case they affect all
services. Anyway, this should be fine.

> I also set log level = 10

Good one.

> I renamed the file to Test-Project.rvt for simplicity. I opened Revit and,
> from Revit, opened this file. I then started attempting to save it as a
> central at around 22:01:30. The file save then got stuck and at around 22:05
> it finally failed, saying that two .dat files did not exist.

Do you experience problems with other applications and/or with other file 
types? Is this specific to
Revit?

> Diego
> On Tue, Nov 13, 2018 at 8:46 AM Anoop C S  wrote:
> > 
> > On Tue, 2018-11-13 at 07:50 -0500, Diego Remolina wrote:
> > > > 
> > > > Thanks for explaining the issue.
> > > > 
> > > > I understand that you are experiencing hangs while doing some operations
> > > > on files/directories in a GlusterFS volume share from a Windows client.
> > > > For simplicity can you attach the output of the following commands:
> > > > 
> > > > # gluster volume info 
> > > > # testparm -s --section-name global
> > > 
> > > gluster v status export
> > > Status of volume: export
> > > Gluster process                            TCP Port  RDMA Port  Online  Pid
> > > ------------------------------------------------------------------------------
> > > Brick 10.0.1.7:/bricks/hdds/brick          49153     0          Y       2540
> > > Brick 10.0.1.6:/bricks/hdds/brick          49153     0          Y       2800
> > > Self-heal Daemon on localhost              N/A       N/A        Y       2912
> > > Self-heal Daemon on 10.0.1.6               N/A       N/A        Y       3107
> > > Self-heal Daemon on 10.0.1.5               N/A       N/A        Y       5877
> > >
> > > Task Status of Volume export
> > > ------------------------------------------------------------------------------
> > > There are no active volume tasks
> > > 
> > > # gluster volume info export
> > > 
> > > Volume Name: export
> > > Type: Replicate
> > > Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
> > > Status: Started
> > > Snapshot Count: 0
> > > Number of Bricks: 1 x 2 = 2
> > > Transport-type: tcp
> > > Bricks:
> > > Brick1: 10.0.1.7:/bricks/hdds/brick
> > > Brick2: 10.0.1.6:/bricks/hdds/brick
> > > Options Reconfigured:
> > > diagnostics.brick-log-level: INFO
> > > diagnostics.client-log-level: INFO
> > > performance.cache-max-file-size: 256MB
> > > client.event-threads: 5
> > > server.event-threads: 5
> > > cluster.readdir-optimize: on
> > > cluster.lookup-optimize: on
> > > performance.io-cache: on
> > > performance.io-thread-count: 64
> > > nfs.disable: on
> > > cluster.server-quorum-type: server
> > > performance.cache-size: 10GB
> > > server.allow-insecure: on
> > > transport.address-family: inet
> > > performance.cache-samba-metadata: on
> > > features.cache-invalidation-timeout: 600
> > > performance.md-cache-timeout: 600
> > > features.cache-invalidation: on
> > > performance.cache-invalidation: on
> > > network.inode-lru-limit: 65536
> > > performance.cache-min-file-size: 0
> > > performance.stat-prefetch: on
> > > cluster.server-quorum-ratio: 51%
> > > 
> > > I had sent you the full smb.conf, so no need to run testparm -s
> > > --section-name global, please reference:
> > > http://termbin.com/y4j0
> > 
> > Fine.
> > 
> > > > 
> > > > > This is the test samba share exported using vfs object = glusterfs:
> > > > > 
> > > > > [vfsgluster]
> > > > >path = /vfsgluster
> > > > >browseable = yes
> > > > >create mask = 660
> > > > >directory mask = 770
> > > > >write list = 

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-15 Thread Ravishankar N



On 11/15/2018 02:11 PM, mabi wrote:


Sure, you will find below the getfattr output of all 3 directories from all 3 
nodes.



Thanks, noted. One more query. Are there files inside each of these 
directories? Or is it just empty directories?



2. Do you know the file (or directory) names corresponding to the other
2 gfids  in heal info output, i.e
gfid:aae4098a-1a71-4155-9cc9-e564b89957cf
gfid:3c92459b-8fa1-4669-9a3d-b38b8d41c360
Please share the getfattr output of them as well.

Unfortunately no. I tried the trick of mounting the volume with the mount option 
"aux-gfid-mount" in order to find the filename corresponding to the GFID and 
then using the following getfattr command:

getfattr -n trusted.glusterfs.pathinfo -e text 
/mnt/g/.gfid/aae4098a-1a71-4155-9cc9-e564b89957cf

this gave me the following output:

trusted.glusterfs.pathinfo="( ( 

 
))"

Then if I check
".../brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf" on node 1 or 
node 3, it does not have any symlink to a file. Am I maybe looking in the wrong 
place, or is there another trick to find the GFID->filename mapping?
symlinks are only for dirs. For files, they would be hard links to the 
actual files. So if stat 
../brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf gives you 
a file, then you can use find -samefile to get the other hardlinks like so:

# cd /brick/.glusterfs/aa/e4/
# find /brick -samefile aae4098a-1a71-4155-9cc9-e564b89957cf

If it is a hardlink, then you can do a getfattr on 
/brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf itself.
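
As an illustration of the distinction (a sketch only; the file path is from the heal
info above, and the directory path assumes the oc_dir gfid shown in the getfattr
output elsewhere in this thread):

# a file gfid is a hard link: stat reports a regular file with link count > 1
stat -c '%F, %h links' /data/myvol-pro/brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf
# a directory gfid is a symlink into its parent directory's gfid entry
readlink /data/myvol-pro/brick/.glusterfs/25/e2/25e2616b-4fb6-4b2a-8945-1afc956fff19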

-Ravi

Regards,
Mabi


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-15 Thread mabi
‐‐‐ Original Message ‐‐‐
On Thursday, November 15, 2018 5:57 AM, Ravishankar N  
wrote:

> 1.Could you provide the getfattr output of the following 3 dirs from all
> 3 nodes?
> i)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
> ii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/
> iii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir

Sure, you will find below the getfattr output of all 3 directories from all 3 
nodes.

i)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10

# NODE 1
trusted.afr.dirty=0x
trusted.afr.myvol-pro-client-1=0x
trusted.gfid=0x7d7d2165f4804edf8c93de01c8768269
trusted.glusterfs.dht=0x0001

# NODE 2
trusted.gfid=0x7d7d2165f4804edf8c93de01c8768269
trusted.glusterfs.dht=0x0001

# NODE 3
trusted.afr.dirty=0x
trusted.afr.myvol-pro-client-1=0x
trusted.gfid=0x7d7d2165f4804edf8c93de01c8768269
trusted.glusterfs.dht=0x0001

ii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/

# NODE 1
trusted.afr.dirty=0x
trusted.afr.myvol-pro-client-1=0x00040003
trusted.gfid=0x70c894ca422b4bceacf15cfb4669abbd
trusted.glusterfs.dht=0x0001

# NODE 2
trusted.gfid=0x10ec1eb1c8544ff2a36c325681713093
trusted.glusterfs.dht=0x0001

# NODE 3
trusted.afr.dirty=0x
trusted.afr.myvol-pro-client-1=0x00040003
trusted.gfid=0x70c894ca422b4bceacf15cfb4669abbd
trusted.glusterfs.dht=0x0001

iii)/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir

# NODE 1
trusted.afr.dirty=0x
trusted.afr.myvol-pro-client-1=0x00030003
trusted.gfid=0x25e2616b4fb64b2a89451afc956fff19
trusted.glusterfs.dht=0x0001

# NODE 2
trusted.gfid=0xd9ac192ce85e4402af105551f587ed9a
trusted.glusterfs.dht=0x0001

# NODE 3
trusted.afr.dirty=0x
trusted.afr.myvol-pro-client-1=0x00030003
trusted.gfid=0x25e2616b4fb64b2a89451afc956fff19
trusted.glusterfs.dht=0x0001


> 2. Do you know the file (or directory) names corresponding to the other
> 2 gfids  in heal info output, i.e
> gfid:aae4098a-1a71-4155-9cc9-e564b89957cf
> gfid:3c92459b-8fa1-4669-9a3d-b38b8d41c360
> Please share the getfattr output of them as well.

Unfortunately no. I tried the trick of mounting the volume with the mount 
option "aux-gfid-mount" in order to find the filename corresponding to the GFID 
and then using the following getfattr command:

getfattr -n trusted.glusterfs.pathinfo -e text 
/mnt/g/.gfid/aae4098a-1a71-4155-9cc9-e564b89957cf

this gave me the following output:

trusted.glusterfs.pathinfo="( 
( 

 
))"

Then if I check
".../brick/.glusterfs/aa/e4/aae4098a-1a71-4155-9cc9-e564b89957cf" on node 1 or 
node 3, it does not have any symlink to a file. Am I maybe looking in the wrong 
place, or is there another trick to find the GFID->filename mapping?
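
For reference, a sketch of the aux-gfid-mount step described above (the server name in
the mount command is illustrative; the volume name and gfid are from this thread):

mount -t glusterfs -o aux-gfid-mount node1:/myvol-pro /mnt/g
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/g/.gfid/aae4098a-1a71-4155-9cc9-e564b89957cf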

Regards,
Mabi
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users