Re: [Gluster-users] invisible files in some directory
On Fri, 18 Jan 2019 at 14:25, Mauro Tridici wrote:
> Dear Users,
>
> I'm facing a new problem on our Gluster volume (v. 3.12.14). Sometimes
> "ls" run in a particular directory returns empty output, even though I
> know the directory contains files and subdirectories. In fact, if I run
> "ls" against a specific file in the same folder, I can see that the
> file is there.
>
> In a few words:
>
> "ls" run in a particular folder returns empty output;
> "ls filename" run in the same folder works fine.
>
> Is there something I can do to identify the cause of this issue?

Yes, please take a tcpdump of the client while running the ls on the
problematic directory and send it across:

tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22

We have seen such issues when the gfid handle for the directory is
missing on the bricks.

Regards,
Nithya

> You can find below some information about the volume.
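Nithya's gfid-handle remark can be checked by hand: on every brick, Gluster keeps a handle for each directory under `.glusterfs/<first two hex chars>/<next two>/<full gfid>`, relative to the brick root. A minimal sketch of deriving that path from the hex value printed by `getfattr -e hex -n trusted.gfid <dir>` (the gfid below is made up for illustration):

```shell
# Hypothetical hex value as printed by `getfattr -e hex -n trusted.gfid <dir>`.
hex="0xa28d88c532954e3598d4210b3af9358c"

# Strip the 0x prefix and re-insert the UUID dashes (8-4-4-4-12 groups).
h="${hex#0x}"
gfid="${h:0:8}-${h:8:4}-${h:12:4}-${h:16:4}-${h:20:12}"

# The handle every brick should carry for this directory:
echo ".glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}"
# -> .glusterfs/a2/8d/a28d88c5-3295-4e35-98d4-210b3af9358c
```

If that path is absent on a brick, readdir served by that brick can come back empty even though named lookups still succeed.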
> Thank you in advance,
> Mauro Tridici
>
> [quoted `gluster volume info` output trimmed; it appears in full in the
> original message below]
[Gluster-users] invisible files in some directory
Dear Users,

I'm facing a new problem on our Gluster volume (v. 3.12.14). Sometimes "ls" run in a particular directory returns empty output, even though I know the directory contains files and subdirectories. In fact, if I run "ls" against a specific file in the same folder, I can see that the file is there.

In a few words:

"ls" run in a particular folder returns empty output;
"ls filename" run in the same folder works fine.

Is there something I can do to identify the cause of this issue? You can find below some information about the volume.

Thank you in advance,
Mauro Tridici

[root@s01 ~]# gluster volume info

Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 12 x (4 + 2) = 72
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: s02-stg:/gluster/mnt1/brick
Brick3: s03-stg:/gluster/mnt1/brick
Brick4: s01-stg:/gluster/mnt2/brick
Brick5: s02-stg:/gluster/mnt2/brick
Brick6: s03-stg:/gluster/mnt2/brick
Brick7: s01-stg:/gluster/mnt3/brick
Brick8: s02-stg:/gluster/mnt3/brick
Brick9: s03-stg:/gluster/mnt3/brick
Brick10: s01-stg:/gluster/mnt4/brick
Brick11: s02-stg:/gluster/mnt4/brick
Brick12: s03-stg:/gluster/mnt4/brick
Brick13: s01-stg:/gluster/mnt5/brick
Brick14: s02-stg:/gluster/mnt5/brick
Brick15: s03-stg:/gluster/mnt5/brick
Brick16: s01-stg:/gluster/mnt6/brick
Brick17: s02-stg:/gluster/mnt6/brick
Brick18: s03-stg:/gluster/mnt6/brick
Brick19: s01-stg:/gluster/mnt7/brick
Brick20: s02-stg:/gluster/mnt7/brick
Brick21: s03-stg:/gluster/mnt7/brick
Brick22: s01-stg:/gluster/mnt8/brick
Brick23: s02-stg:/gluster/mnt8/brick
Brick24: s03-stg:/gluster/mnt8/brick
Brick25: s01-stg:/gluster/mnt9/brick
Brick26: s02-stg:/gluster/mnt9/brick
Brick27: s03-stg:/gluster/mnt9/brick
Brick28: s01-stg:/gluster/mnt10/brick
Brick29: s02-stg:/gluster/mnt10/brick
Brick30: s03-stg:/gluster/mnt10/brick
Brick31: s01-stg:/gluster/mnt11/brick
Brick32: s02-stg:/gluster/mnt11/brick
Brick33: s03-stg:/gluster/mnt11/brick
Brick34: s01-stg:/gluster/mnt12/brick
Brick35: s02-stg:/gluster/mnt12/brick
Brick36: s03-stg:/gluster/mnt12/brick
Brick37: s04-stg:/gluster/mnt1/brick
Brick38: s05-stg:/gluster/mnt1/brick
Brick39: s06-stg:/gluster/mnt1/brick
Brick40: s04-stg:/gluster/mnt2/brick
Brick41: s05-stg:/gluster/mnt2/brick
Brick42: s06-stg:/gluster/mnt2/brick
Brick43: s04-stg:/gluster/mnt3/brick
Brick44: s05-stg:/gluster/mnt3/brick
Brick45: s06-stg:/gluster/mnt3/brick
Brick46: s04-stg:/gluster/mnt4/brick
Brick47: s05-stg:/gluster/mnt4/brick
Brick48: s06-stg:/gluster/mnt4/brick
Brick49: s04-stg:/gluster/mnt5/brick
Brick50: s05-stg:/gluster/mnt5/brick
Brick51: s06-stg:/gluster/mnt5/brick
Brick52: s04-stg:/gluster/mnt6/brick
Brick53: s05-stg:/gluster/mnt6/brick
Brick54: s06-stg:/gluster/mnt6/brick
Brick55: s04-stg:/gluster/mnt7/brick
Brick56: s05-stg:/gluster/mnt7/brick
Brick57: s06-stg:/gluster/mnt7/brick
Brick58: s04-stg:/gluster/mnt8/brick
Brick59: s05-stg:/gluster/mnt8/brick
Brick60: s06-stg:/gluster/mnt8/brick
Brick61: s04-stg:/gluster/mnt9/brick
Brick62: s05-stg:/gluster/mnt9/brick
Brick63: s06-stg:/gluster/mnt9/brick
Brick64: s04-stg:/gluster/mnt10/brick
Brick65: s05-stg:/gluster/mnt10/brick
Brick66: s06-stg:/gluster/mnt10/brick
Brick67: s04-stg:/gluster/mnt11/brick
Brick68: s05-stg:/gluster/mnt11/brick
Brick69: s06-stg:/gluster/mnt11/brick
Brick70: s04-stg:/gluster/mnt12/brick
Brick71: s05-stg:/gluster/mnt12/brick
Brick72: s06-stg:/gluster/mnt12/brick
Options Reconfigured:
disperse.eager-lock: off
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
cluster.server-quorum-type: server
features.default-soft-limit: 90
features.quota-deem-statfs: on
performance.io-thread-count: 16
disperse.cpu-extensions: auto
performance.io-cache: off
network.inode-lru-limit: 5
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
cluster.readdir-optimize: on
performance.parallel-readdir: off
performance.readdir-ahead: on
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 4
nfs.disable: on
transport.address-family: inet
cluster.quorum-type: none
cluster.min-free-disk: 10
performance.client-io-threads: on
features.quota: on
features.inode-quota: on
features.bitrot: on
features.scrub: Active
network.ping-timeout: 0
cluster.brick-multiplex: off
cluster.server-quorum-ratio: 51

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files
On Fri, 14 Dec 2018 at 19:10, Raghavendra Gowdappa wrote:
>
> On Fri, Dec 14, 2018 at 6:38 PM Lindolfo Meira wrote:
>> It happened to me using gluster 5.0, on openSUSE Leap 15, during a
>> benchmark with IOR: the volume seemed normally mounted, but I was
>> unable to overwrite files, and ls would show the volume as totally
>> empty. I could write new files without any problems, but ls would
>> still not show anything.
>>
>> Does anyone know anything about that?
>
> Can you get the following information from the problematic directory
> on the backend bricks?
>
> * ls -l
> * getfattr -e hex -d -m .

We have seen similar behaviour if the gfid handle is missing for the
parent directory on the bricks. Once you get the gfid, please check
whether the handle exists on the backend as well.

>> Lindolfo Meira, MSc
>> Diretor Geral, Centro Nacional de Supercomputação
>> Universidade Federal do Rio Grande do Sul
>> +55 (51) 3308-3122

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
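The missing-handle failure mode is easy to model with plain directories. The sketch below fakes a brick layout under a temp directory (all paths are made up) and shows the presence check one would repeat on each real brick for the problem directory's gfid:

```shell
# Toy brick layout (paths hypothetical); on a real brick, the handle for
# a directory is a symlink at .glusterfs/<aa>/<bb>/<gfid> under the
# brick root, pointing back at the directory.
brick=$(mktemp -d)
gfid="d1a2b3c4-0000-0000-0000-000000000000"
mkdir -p "$brick/data" "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}"
ln -s "../../00/00/parent-gfid/data" \
      "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"

# The check to repeat on every brick:
if [ -h "$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid" ]; then
    state="handle present"
else
    state="handle missing"
fi
echo "$state"
```

A brick where the check reports "handle missing" is a candidate culprit for the empty listings.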
Re: [Gluster-users] Invisible files
Unfortunately I didn't try that. But I will the next time it happens.

Lindolfo Meira, MSc
Diretor Geral, Centro Nacional de Supercomputação
Universidade Federal do Rio Grande do Sul
+55 (51) 3308-3139

On Fri, 14 Dec 2018, Raghavendra Gowdappa wrote:
> On Fri, Dec 14, 2018 at 6:38 PM Lindolfo Meira wrote:
>> It happened to me using gluster 5.0, on openSUSE Leap 15, during a
>> benchmark with IOR: [...]
>
> Can you get the following information from the problematic directory
> on the backend bricks?
>
> * ls -l
> * getfattr -e hex -d -m .

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files
On Fri, Dec 14, 2018 at 6:38 PM Lindolfo Meira wrote:
> It happened to me using gluster 5.0, on openSUSE Leap 15, during a
> benchmark with IOR: the volume seemed normally mounted, but I was
> unable to overwrite files, and ls would show the volume as totally
> empty. I could write new files without any problems, but ls would
> still not show anything.
>
> Does anyone know anything about that?

Can you get the following information from the problematic directory on
the backend bricks?

* ls -l
* getfattr -e hex -d -m .

> Lindolfo Meira, MSc
> Diretor Geral, Centro Nacional de Supercomputação
> Universidade Federal do Rio Grande do Sul
> +55 (51) 3308-3122

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files and directories
This sounds like it may be a different issue. Can you file a bug for this ([1]) and provide all the logs/information you have (directory name, files on the bricks, mount logs, etc.)?

Thanks,
Nithya

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

On 4 April 2018 at 19:03, Gudrun Mareike Amedick wrote:
> Hi,
>
> I'm currently facing the same behaviour.
>
> Today, one of my users tried to delete a folder. It failed, saying the
> directory wasn't empty. ls -lah showed an empty folder, but on the
> bricks I found some files. Renaming the directory caused it to
> reappear.
>
> We're running gluster 3.12.7-1 on Debian 9 from the repositories
> provided by gluster.org, upgraded from 3.8 a while ago. The volume is
> mounted via the fuse client.
>
> [quoted volume settings and earlier thread trimmed; they appear in
> full in Gudrun's message below]
Re: [Gluster-users] Invisible files and directories
Hi,

I'm currently facing the same behaviour.

Today, one of my users tried to delete a folder. It failed, saying the directory wasn't empty. ls -lah showed an empty folder, but on the bricks I found some files. Renaming the directory caused it to reappear.

We're running gluster 3.12.7-1 on Debian 9 from the repositories provided by gluster.org, upgraded from 3.8 a while ago. The volume is mounted via the fuse client. Our settings are:

> gluster volume info $VOLUMENAME
>
> Volume Name: $VOLUMENAME
> Type: Distribute
> Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 23
> Transport-type: tcp
> Bricks:
> Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> Brick2: gluster02:/srv/glusterfs/bricks/DATA202/data
> Brick3: gluster02:/srv/glusterfs/bricks/DATA203/data
> Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> Brick6: gluster02:/srv/glusterfs/bricks/DATA206/data
> Brick7: gluster02:/srv/glusterfs/bricks/DATA207/data
> Brick8: gluster02:/srv/glusterfs/bricks/DATA208/data
> Brick9: gluster01:/srv/glusterfs/bricks/DATA110/data
> Brick10: gluster01:/srv/glusterfs/bricks/DATA111/data
> Brick11: gluster01:/srv/glusterfs/bricks/DATA112/data
> Brick12: gluster01:/srv/glusterfs/bricks/DATA113/data
> Brick13: gluster01:/srv/glusterfs/bricks/DATA114/data
> Brick14: gluster02:/srv/glusterfs/bricks/DATA209/data
> Brick15: gluster01:/srv/glusterfs/bricks/DATA101/data
> Brick16: gluster01:/srv/glusterfs/bricks/DATA102/data
> Brick17: gluster01:/srv/glusterfs/bricks/DATA103/data
> Brick18: gluster01:/srv/glusterfs/bricks/DATA104/data
> Brick19: gluster01:/srv/glusterfs/bricks/DATA105/data
> Brick20: gluster01:/srv/glusterfs/bricks/DATA106/data
> Brick21: gluster01:/srv/glusterfs/bricks/DATA107/data
> Brick22: gluster01:/srv/glusterfs/bricks/DATA108/data
> Brick23: gluster01:/srv/glusterfs/bricks/DATA109/data
> Options Reconfigured:
> nfs.addr-namelookup: off
> transport.address-family: inet
> nfs.disable: on
> diagnostics.brick-log-level: ERROR
> performance.readdir-ahead: on
> auth.allow: $IP RANGE
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on

We had a scheduled reboot yesterday.

Kind regards

Gudrun Amedick

On Wednesday, 04.04.2018 at 01:33 -0400, Serg Gulko wrote:
> Right now the volume is running with
>
> readdir-optimize off
> parallel-readdir off
>
> [earlier thread trimmed; the messages appear in full below]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files and directories
Right now the volume is running with

readdir-optimize off
parallel-readdir off

On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran wrote:
> Hi Serg,
>
> Do you mean that turning off readdir-optimize did not work? Or did you
> mean turning off parallel-readdir did not work?
>
> [earlier messages trimmed; they appear in full below]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files and directories
Hi Serg,

Do you mean that turning off readdir-optimize did not work? Or did you mean turning off parallel-readdir did not work?

On 4 April 2018 at 10:48, Serg Gulko wrote:
> Hello!
>
> Unfortunately no. The directory is still not listed by ls -la, but I
> can cd into it. I can rename it and it becomes available; when I
> rename it back to the original name, it disappears again.
>
> [earlier messages trimmed; they appear in full below]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files and directories
Hello!

Unfortunately no. The directory is still not listed by ls -la, but I can cd into it. I can rename it and it becomes available; when I rename it back to the original name, it disappears again.

On Wed, Apr 4, 2018 at 12:56 AM, Raghavendra Gowdappa wrote:
>
> On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko wrote:
>> [original report trimmed; it appears in full below]
>
> Can you check whether turning off option performance.readdir-ahead
> helps?

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files and directories
On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko wrote:
> Hello!
>
> We are running a distributed volume that contains 7 bricks. The volume
> is mounted using the native fuse client.
>
> After an unexpected system reboot, some files disappeared from the
> fuse mount point but are still available on the bricks.
>
> The way they disappeared confuses me a lot. I can't see certain
> directories using ls -la but, at the same time, can cd into the
> missing directory. I can rename the invisible directory and it becomes
> accessible. When I rename it back to the original name, it becomes
> invisible again.
>
> I also tried to mount the same volume in another location and run ls,
> hoping that self-heal would fix the problem. Unfortunately, it did
> not.
>
> Is there a way to bring our storage back to normal?

Can you check whether turning off the option performance.readdir-ahead helps?

> glusterfs 3.8.8 built on Jan 11 2017 16:33:17
>
> Serg Gulko

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
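For reference, Raghavendra's suggestion amounts to the following CLI commands. The volume name `volume_1` is taken from later in this thread; substitute your own. This is a sketch of the standard `gluster volume get`/`set` workflow, not output captured from this cluster:

```shell
# Inspect the current value, turn the readdir-ahead translator off,
# then re-test ls on the problem directory.
gluster volume get volume_1 performance.readdir-ahead
gluster volume set volume_1 performance.readdir-ahead off

# The related readdir options discussed in this thread can be
# toggled the same way:
gluster volume set volume_1 cluster.readdir-optimize off
gluster volume set volume_1 performance.parallel-readdir off
```

Option changes take effect on the live volume, so this is cheap to try and to revert (`gluster volume reset volume_1 <option>`).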
Re: [Gluster-users] Invisible files and directories
Thanks for the reply, Vlad!

Now is the trickiest part:

gluster volume get volume_1 cluster.readdir-optimize
Option                                  Value
------                                  -----
cluster.readdir-optimize                off

Serg

On Tue, Apr 3, 2018 at 9:20 PM, Vlad Kopylov wrote:
> http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
>
> [quoted original report trimmed; it appears in full below]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible files and directories
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html

On Tue, Apr 3, 2018 at 6:43 PM, Serg Gulko wrote:
> Hello!
>
> We are running a distributed volume that contains 7 bricks. The volume
> is mounted using the native fuse client.
>
> After an unexpected system reboot, some files disappeared from the
> fuse mount point but are still available on the bricks.
>
> The way they disappeared confuses me a lot. I can't see certain
> directories using ls -la but, at the same time, can cd into the
> missing directory. I can rename the invisible directory and it becomes
> accessible. When I rename it back to the original name, it becomes
> invisible again.
>
> I also tried to mount the same volume in another location and run ls,
> hoping that self-heal would fix the problem. Unfortunately, it did
> not.
>
> Is there a way to bring our storage back to normal?
>
> glusterfs 3.8.8 built on Jan 11 2017 16:33:17
>
> Serg Gulko

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] Invisible files and directories
Hello!

We are running a distributed volume that contains 7 bricks. The volume is mounted using the native fuse client.

After an unexpected system reboot, some files disappeared from the fuse mount point but are still available on the bricks.

The way they disappeared confuses me a lot. I can't see certain directories using ls -la but, at the same time, can cd into the missing directory. I can rename the invisible directory and it becomes accessible. When I rename it back to the original name, it becomes invisible again.

I also tried to mount the same volume in another location and run ls, hoping that self-heal would fix the problem. Unfortunately, it did not.

Is there a way to bring our storage back to normal?

glusterfs 3.8.8 built on Jan 11 2017 16:33:17

Serg Gulko

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] Invisible Files?
Hello List,

I am a relative newbie to Gluster, so I wonder if someone could help me troubleshoot a problem. I appear to have invisible files. The following illustrates:

ls -la
total 92
drwxr-xr-x.  7 root root 8192 Dec  3 08:53 .
drwxr-xr-x.  4 root root 8192 Nov 28 12:12 ..
drwx------. 12 root root 8192 Nov 28 12:20 def.com
drwx------. 12 root root 8192 Nov 28 12:19 ghi.com
drwx------. 12 root root 8192 Nov 28 12:19 jkl.com
drwx------. 12 root root 8192 Nov 28 12:20 xyz.com
[root@ww04h06 domains]# mkdir abc.com
mkdir: cannot create directory `abc.com': File exists
[root@ww04h06 domains]#

The directory abc.com does not appear in the ls output, yet I can't create it either. This folder is a glusterfs mount from a distributed pair of gluster bricks. Where should I start looking?

Thanks!

Alan

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Invisible Files?
On 12/03/2013 02:36 PM, Alan Simpson wrote:
> Hello List,
>
> I am a relative newbie to Gluster, so I wonder if someone could help
> me troubleshoot a problem. I appear to have invisible files.
>
> [ls output trimmed]
>
> [root@ww04h06 domains]# mkdir abc.com
> mkdir: cannot create directory `abc.com': File exists
>
> The directory abc.com does not appear in the ls output, yet I can't
> create it either. This folder is a glusterfs mount from a distributed
> pair of gluster bricks. Where should I start looking?

The brick log files might be a good starting place to determine which brick failed the operation. After that, you can check the corresponding brick to determine whether abc.com exists there. A mkdir operation is by default dispatched to all bricks in a distributed volume, and a failure on any one of them can cause the mkdir on the mount point to fail.

-Vijay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
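Vijay's point — that mkdir must succeed on every brick, so one stale entry is enough to fail it while the name stays invisible to readdir — can be modelled with plain directories. A toy sketch (the "bricks" are temp directories, not real Gluster bricks):

```shell
# Two fake "bricks"; a stale abc.com entry is left behind on the second.
b1=$(mktemp -d)
b2=$(mktemp -d)
mkdir "$b2/abc.com"

# DHT-style mkdir check: the name must be absent on every brick,
# otherwise one brick returns EEXIST and the whole mkdir fails.
ok=yes
for b in "$b1" "$b2"; do
    if [ -e "$b/abc.com" ]; then
        ok=no    # this brick would report "File exists"
    fi
done
echo "mkdir abc.com would succeed: $ok"
```

Listing each brick's export directory (and grepping its brick log) for abc.com is therefore the way to find which brick holds the stale copy.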