On 03/13/2017 03:08 PM, Luca Gervasi wrote:
Hello Ravishankar,
we had to change the directory as we fixed that one, so please check
these links, which refer to a new (broken) path:
https://nopaste.me/view/1ee13a63 LS debug log
https://nopaste.me/view/80ac1e13 getfattr
https://nopaste.me/view/eafb0b44 volume status
Thanks, could you check if setting performance.parallel-readdir to 'off'
solves the issue? If yes, do you mind raising a bug and letting us know
the BZ ID?
Please note that the parallel-readdir option is still experimental.
-Ravi
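A minimal sketch of toggling that option from the gluster CLI (the volume
name below is a placeholder, not taken from this thread):

    # disable the experimental parallel-readdir option on the affected volume
    gluster volume set <VOLNAME> performance.parallel-readdir off
    # re-enable it later if desired
    gluster volume set <VOLNAME> performance.parallel-readdir on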
Thanks in advance.
Luca & Andrea
On Sat, 11 Mar 2017 at 02:01 Ravishankar N <[email protected]> wrote:
On 03/10/2017 10:32 PM, Luca Gervasi wrote:
Hi,
I'm Andrea's colleague. I'd like to add that we have no
trusted.afr xattr on the root folder
Just to confirm, this would be 'includes2013' right?
where those files are located and every file seems to be clean on
each brick.
You can find another example file's xattr here:
https://nopaste.me/view/3c2014ac
Here a listing: https://nopaste.me/view/eb4430a2
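For context, a minimal sketch of how such per-file xattrs are typically
inspected directly on a brick (the paths below are placeholders):

    # dump all extended attributes of one file as stored on a brick
    getfattr -d -m . -e hex /<brick-path>/<directory>/<file>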
This behavior makes the directory which contains those files
undeletable (we had to clean them up at the brick level, removing all
the hardlinks too).
This issue is visible on FUSE-mounted volumes, while it's not
noticeable when the volume is mounted over NFS through Ganesha.
Could you provide the complete output of `gluster volume info`? I
want to find out which bricks constitute a replica pair.
Also could you change the diagnostics.client-log-level to DEBUG
temporarily, do an `ls <directory where you see duplicate
entries>` on the fuse mount and share the corresponding mount log?
Thanks,
Ravi
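A minimal sketch of the debug-capture steps described above (volume name,
mount point, and directory are placeholders):

    # temporarily raise the client log level
    gluster volume set <VOLNAME> diagnostics.client-log-level DEBUG
    # trigger the listing on the FUSE mount
    ls /mnt/<VOLNAME>/<directory-with-duplicates>
    # the mount log is typically under /var/log/glusterfs/, named after the
    # mount point; share that file, then restore the previous level
    gluster volume set <VOLNAME> diagnostics.client-log-level ERROR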
Thanks a lot.
Luca Gervasi
On Fri, 10 Mar 2017 at 17:41 Andrea Fogazzi <[email protected]> wrote:
Hi community,
we ran into an extensive issue on our installation of Gluster
3.10, which we upgraded from 3.8.8 (it's
a distribute+replicate volume, 5 nodes, 3 bricks in replica
2+1 with quorum); recently we noticed a frequent issue where files
get duplicated in some of the directories; this is
visible on the FUSE mount points (RW), but not on the
NFS/Ganesha (RO) mount points.
A sample of an ll output:
---------T 1 48 web_rw 0 Mar 10 11:57 paginazione.shtml
-rw-rw-r-- 1 48 web_rw 272 Feb 18 22:00 paginazione.shtml
As you can see, the file is listed twice, but only one of the
two is good (the name is identical; we verified that no
spurious/hidden characters are present in the name). The
issue may be related to how we uploaded the files to the
file system, via incremental rsync on the FUSE mount.
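An illustrative sketch of the kind of incremental rsync described (source
path, mount point, and flags are assumptions, not the actual command used):

    # incremental copy onto the FUSE mount point
    rsync -av /path/to/source/ /mnt/<VOLNAME>/<target-directory>/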
Does anyone have suggestions on how this can happen, how to fix
the existing duplication, or how to prevent it from happening again?
Thanks in advance.
Best regards,
andrea
Options Reconfigured:
performance.cache-invalidation: true
cluster.favorite-child-policy: mtime
features.cache-invalidation: 1
network.inode-lru-limit: 90000
performance.cache-size: 1024MB
storage.linux-aio: on
nfs.outstanding-rpc-limit: 64
storage.build-pgfid: on
cluster.server-quorum-type: server
cluster.self-heal-daemon: enable
performance.nfs.io-cache: on
performance.client-io-threads: on
performance.nfs.stat-prefetch: on
performance.nfs.io-threads: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
performance.md-cache-timeout: 1
performance.io-thread-count: 16
performance.high-prio-threads: 32
performance.normal-prio-threads: 32
performance.low-prio-threads: 32
performance.least-prio-threads: 1
nfs.acl: off
nfs.rpc-auth-unix: off
diagnostics.client-log-level: ERROR
diagnostics.brick-log-level: ERROR
cluster.lookup-unhashed: auto
performance.nfs.quick-read: on
performance.nfs.read-ahead: on
cluster.quorum-type: auto
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.lookup-optimize: on
cluster.readdir-optimize: on
performance.read-ahead: off
performance.write-behind-window-size: 1MB
client.event-threads: 4
server.event-threads: 16
cluster.granular-entry-heal: enable
performance.parallel-readdir: on
cluster.server-quorum-ratio: 51
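For reference, a minimal sketch of how such a reconfigured-options list can
be dumped from the CLI (volume name is a placeholder):

    # shows the volume layout plus the 'Options Reconfigured' section
    gluster volume info <VOLNAME>
    # lists every option value, including defaults
    gluster volume get <VOLNAME> all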
Andrea Fogazzi
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users