Dear Ravi,
Thank you for your answer. I will start by sending you below the getfattr output
for the first entry which does not get healed (it is in fact a directory). It
is the following path/directory from the output of one of my previous mails:
Can you share the getfattr output of all 4 entries from all 3 bricks?
Also, can you tailf glustershd.log on all nodes and see if anything is
logged for these entries when you run 'gluster volume heal $volname'?
Regards,
Ravi
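For reference, a minimal sketch of the commands usually used to gather that information (the brick path and volume name below are placeholders, not taken from this setup):

# on each of the three nodes, dump the xattrs of the entry on that brick
getfattr -d -m . -e hex /path/to/brick/<entry>

# watch the self-heal daemon log while triggering a heal
tail -f /var/log/glusterfs/glustershd.log
gluster volume heal <volname>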
On 11/07/2018 01:22 PM, mabi wrote:
Hi,
I have just updated gluster and I am now taking a look at the logs. I am
seeing a lot of similar entries. Are these something to worry about?
The volume seems to be OK:
[root@ysmha02 export]# gluster v heal export info
Brick 10.0.1.7:/bricks/hdds/brick
Status: Connected
Number
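A quick way to narrow those log entries down to actual errors or warnings, sketched here assuming they are in the standard gluster log files, is to filter on the severity letter of the log format (lines look like "[2018-11-07 ...] E [MSGID: ...] ..."):

grep -E '\] [EW] \[' /var/log/glusterfs/glustershd.log /var/log/glusterfs/glusterd.log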
Hello,
More information:
With a 20 GB volume the auto mount works fine with an fstab entry like this:
stogfstest-01:/GFSVOL /gfsvol glusterfs defaults,_netdev 0 0
But with a 2.4 PB volume that does not work. I need to add the backupvolfile-server
option, and it always mounts through the backup server.
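For reference, the option being referred to looks roughly like this in fstab (a sketch; stogfstest-02 is a placeholder for the second server of the cluster):

stogfstest-01:/GFSVOL /gfsvol glusterfs defaults,_netdev,backupvolfile-server=stogfstest-02 0 0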
Hi All!
I'm trying to use the oVirt virtualisation platform with GlusterFS storage and Intel
Omni-Path "InfiniBand" interfaces.
All packages are version 3.12 from the ovirt-4.2 repository, but I also tried gluster
4.1 from the CentOS centos-release-gluster41 repository.
Hosts are CentOS 7.5.
glusterd crashes.
On 07.11.2018 15:01, Mike Lykov wrote:
RDMA on its own seems to be working:
[root@ovirtnode5 log]# ib_write_bw -D 30 --cpu_util ovirtstor1
---
RDMA_Write BW Test
Dual-port : OFF
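In case it helps to narrow the crash down, a minimal sketch of how the RDMA transport is normally requested and then checked on a volume (volume and brick names here are placeholders, not taken from this setup):

gluster volume create testvol transport tcp,rdma node1:/bricks/brick1/testvol node2:/bricks/brick1/testvol
gluster volume info testvol | grep -i transport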