Thank you! I am very interested. I hadn't considered the automounter
idea.
Also, your fstab takes a different dependency approach than mine does.
If you happen to have the examples handy, I'll give them a shot here.
I'm looking forward to emerging from this dark place of dependencies.
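(For the list: the kind of fstab entry I have in mind is sketched below. Server, volume, and mountpoint names are placeholders, not an actual config.)

server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0

With x-systemd.automount the mount is only triggered on first access, which should sidestep most of the ordering problems at boot.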
Dear Gluster Community,
I also have an issue concerning performance. Over the last few days I updated
our test cluster from GlusterFS v5.5 to v7.0. The setup in general:
2 HP DL380 servers with 10Gbit NICs, one Distributed-Replicate volume with 2
replica pairs (replica 2). Clients access over SMB via Samba (vfs_glusterfs).
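(For context, the Samba share is defined roughly as below; the share and volume names are placeholders rather than our exact config.)

[gluster-share]
    path = /
    read only = no
    vfs objects = glusterfs
    glusterfs:volume = testvol
    glusterfs:logfile = /var/log/samba/glusterfs-testvol.log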
Sure,
here is what the setup was:
[root@ovirt1 ~]# systemctl cat var-run-gluster-shared_storage.mount --no-pager
# /run/systemd/generator/var-run-gluster-shared_storage.mount
# Automatically generated by systemd-fstab-generator
[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5)
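(The unit above was generated by systemd-fstab-generator from an fstab line; it would have looked something like the following, with the hostname as a placeholder:)

ovirt1:/gluster_shared_storage /var/run/gluster/shared_storage glusterfs defaults,_netdev 0 0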
Hi Niels,
It seems that, 5 days later, v6.6 is still missing.
Do you have any contacts on the CentOS mailing lists whom I can ask to check
what's going on?
Best Regards,
Strahil Nikolov

On Oct 31, 2019 10:39, Niels de Vos wrote:
>
> On Thu, Oct 31, 2019 at 07:39:56AM +, Strahil Nikolov
I am trying to mount a GlusterFS volume in a Kubernetes pod using a FUSE
mount. Kubernetes hardcodes the backup-volfile-servers option, which causes an
error while mounting. I can mount fine without this option. Is there a way to
make the client happy by setting something on the server?
Mounting arguments:
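(For reference, the option Kubernetes injects corresponds to a manual mount along these lines; server and volume names here are placeholders:)

mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol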
Hello Amar,
> Can you please check the profile info [1]? That may give some hints.
I am attaching the output of `sudo gluster volume profile <volname> info` as a
text file to preserve formatting. It covers the time from Friday night to
Monday morning; during this time the cluster has been the target
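(For anyone reproducing this: profiling has to be started before the counters are collected. A minimal sequence, with the volume name as a placeholder:)

gluster volume profile myvol start
... run the workload ...
gluster volume profile myvol info
gluster volume profile myvol stop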
Hello Strahil,
> You can set your mounts with 'noatime,nodiratime' options for better
> performance.
Thanks for the suggestion! I'll try that eventually, but I don't
think `noatime` will make much difference on a write-mostly workload.
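(For completeness, the suggested options would slot into the fstab entry like so; device and mountpoint are placeholders:)

server1:/myvol /mnt/myvol glusterfs defaults,_netdev,noatime,nodiratime 0 0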
Thanks,
R