Hi all,
we set the following options today:
performance.read-ahead=on
performance.write-behind-window-size=4MB
performance.cache-max-file-size=10
performance.write-behind=off
performance.cache-invalidation=on
server.event-threads=4
client.event-threads=4
performance.parallel-readdir=on
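For reference, options like these are applied one at a time with `gluster volume set`. A dry-run sketch (the volume name `myvol` is a placeholder, not from this thread) that only prints the commands so they can be reviewed before running them:

```shell
#!/bin/sh
# Dry-run sketch: print (do not execute) a `gluster volume set` command
# for each of the options listed above. VOL is a placeholder volume name.
VOL=myvol
for opt in \
    performance.read-ahead=on \
    performance.write-behind-window-size=4MB \
    performance.cache-max-file-size=10 \
    performance.write-behind=off \
    performance.cache-invalidation=on \
    server.event-threads=4 \
    client.event-threads=4 \
    performance.parallel-readdir=on
do
    # Split key=value at the first '=' and print the resulting command.
    echo gluster volume set "$VOL" "${opt%%=*}" "${opt#*=}"
done
```

Piping the output to `sh` would apply them; applying one option at a time also makes it easier to bisect which setting changes behaviour.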
Hi Martin,
my volume is a full replica
I obtain messages like this in /var/log/glusterfs/bricks
gluster--brick.log:[2020-11-18 13:57:41.434070] I [MSGID: 115071] [server-rpc-fops_v2.c:1492:server4_create_cbk] 0-gv-ho-server: CREATE info [{frame=194885}, {path=/user/Documents/HbciLog.txt},
Hi Benedikt,
You are right, disabling performance.readdir-ahead didn't solve the issue
for me.
It took a little longer to find out, and I wasn't sure if the errors were
already there before turning off the setting.
Is your volume a full replica, or are you using an arbiter?
On Wed, Nov 18, 2020
Dear Martin,
Do you have any new observations regarding this issue?
I just found your thread. This error of missing files on FUSE mounts
is appearing on my setup with 3 replicated bricks on gluster 8.2, too.
I set performance.readdir-ahead to off, but the error still occurs quite
frequently.
Thanks Mahdi, I'll try that option, I hope it doesn't come with a big
performance penalty.
Recently upgraded to 7.8 on Strahil's advice, but before that I had the
feeling that restarting the brick processes on one node in particular (the
one with the most user connections) helped a lot.
I've
Hello Martín,
Try disabling "performance.readdir-ahead"; we had a similar issue, and
disabling it solved the problem for us.
gluster volume set tapeless performance.readdir-ahead off
On Tue, Oct 27, 2020 at 8:23 PM Martín Lorenzo wrote:
> Hi Strahil, today we have the same
Yes,
common sense suggests that any issues should be observed on nodes that did not
perform the operation.
As you see the issue constantly on a single client, maybe you can reinstall
the packages there and reconnect.
Also, consider updating to the latest 7.X version as soon as possible, and then the
Have you tried to reduce the cache timeouts?
I can't find your gluster version in the thread - can you share the OS +
gluster version again?
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020 at 19:23:28 GMT+2, Martín Lorenzo
wrote:
Hi Strahil, today we have the same
Hi Strahil
The versions are:
CentOS Linux release 7.7.1908
glusterfs 7.3
I am setting performance.md-cache-timeout and performance.nl-cache-timeout
to 120s
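i.e., something along these lines (the volume name is a placeholder):

```shell
# Sketch, VOL is a placeholder volume name:
gluster volume set VOL performance.md-cache-timeout 120
gluster volume set VOL performance.nl-cache-timeout 120
```

(As far as I know, nl-cache-timeout only takes effect when performance.nl-cache is enabled.)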
The weird thing about it is that it always happens on the same mount where the
operation (copy, mv) was done. My common sense is that any cache-related problem
Hi Strahil, today we have the same number of clients on all nodes, but the
problem persists. I have the impression that it gets more frequent as the
server capacity fills up; now we are having at least one incident per day.
Regards,
Martin
On Mon, Oct 26, 2020 at 8:09 AM Martín Lorenzo wrote:
> Hi
Hi Strahil, thanks for your reply,
I had one node with 13 clients and the rest with 14. I've just restarted the
services on that node, so now I have 14; let's see what happens.
Regarding the samba repos, I wasn't aware of that; I was using the CentOS main
repo. I'll check them out.
Best Regards,
Martin
On
Hi Martin,
why would you use samba 4.10.5? What is your OS version?
Recently AnoopCS has provided packages for samba 4.12 .
Best Regards,
Strahil Nikolov
On Friday, 23 October 2020 at 18:50:35 GMT+3, Martín Lorenzo
wrote:
Hi Eli, remounting the volume fixes it.
So, regarding cache invalidation, which volume options should I modify in
order to minimize it?
I cannot use the gluster VFS on Samba since it is broken on 4.10.5:
https://lists.samba.org/archive/samba/2019-June/223683.html
Also, is it correlated to system
On Tue, Oct 20, 2020 at 8:41 AM Martín Lorenzo wrote:
>
> Hi, I have the following problem, I have a distributed replicated cluster set
> up with samba and CTDB, over fuse mount points
> I am having inconsistencies across the FUSE mounts, users report that files
> are disappearing after being
Do you have the same number of clients connected to each brick?
I guess something like this can show it:
gluster volume status VOL clients
gluster volume status VOL client-list
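To compare the per-brick counts quickly, a parsing sketch could help. The heredoc below is a hypothetical stand-in for what `gluster volume status VOL clients` prints (assuming one "Clients connected" line per brick; node names and counts are made up); on a real cluster, replace `sample` with the actual command:

```shell
#!/bin/sh
# Sketch: summarize clients per brick. The sample output below is
# HYPOTHETICAL (brick paths, node names, and counts are invented) and
# stands in for `gluster volume status VOL clients` on a real cluster.
sample() {
cat <<'EOF'
Client connections for volume VOL
----------------------------------------------
Brick : node1:/data/brick
Clients connected : 14
Brick : node2:/data/brick
Clients connected : 13
Brick : node3:/data/brick
Clients connected : 14
EOF
}
# Print "<brick> <client count>" per brick; a lower count on one brick
# hints at a client that has lost its connection to it.
sample | awk '/^Brick :/ {b=$3} /^Clients connected/ {print b, $NF}'
```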
Best Regards,
Strahil Nikolov
On Tuesday, 20 October 2020 at 15:41:45 GMT+3, Martín Lorenzo
wrote:
Hi, I have the following problem: I have a distributed replicated cluster
set up with Samba and CTDB, over FUSE mount points.
I am having inconsistencies across the FUSE mounts; users report that files
are disappearing after being copied/moved. I take a look at the mount
points on each node, and