For the file/GFID that works (i.e. does not return "transport endpoint is not
connected"), ls -l on .gfid/9d9be7db-6576-45af-b211-32d1187a4e84 returns
"-rw-r--r-- 1 root root 60637 Feb 15 08:36
.gfid/9d9be7db-6576-45af-b211-32d1187a4e84" (ls -l on .gfid itself returns
"operation not supported")
“getfattr -d -e
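As background for the .gfid lookups above: a sketch of addressing a file by GFID through the aux-gfid-mount virtual namespace (the server name, volume name, and mountpoint below are hypothetical placeholders; only the GFID itself comes from the message). This is not locally runnable; it assumes a live GlusterFS volume.

```
# Mount the volume with the virtual .gfid/ namespace enabled
mount -t glusterfs -o aux-gfid-mount server1:/volname /mnt/vol

# Stat the file by GFID (listing .gfid itself is not supported, as seen above)
ls -l /mnt/vol/.gfid/9d9be7db-6576-45af-b211-32d1187a4e84

# Dump all extended attributes of the file in hex
getfattr -d -e hex -m . /mnt/vol/.gfid/9d9be7db-6576-45af-b211-32d1187a4e84
```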
Hi Peter,
I think Strahil means running the command "hosted-engine --set-maintenance
--mode=local"; this is also possible from the oVirt UI, via the ribbon in
the Hosts section;
From the logs it seems gluster has difficulty finding the shards, e.g.;
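For reference, a minimal sketch of the maintenance commands involved, assuming a hosted-engine deployment and run as root on the host itself (the mode names are the standard hosted-engine ones); this requires a live oVirt setup and is not locally runnable:

```
# Put the local host's HA agent into maintenance
hosted-engine --set-maintenance --mode=local

# ... perform the maintenance work ...

# Leave maintenance again
hosted-engine --set-maintenance --mode=none

# Verify the HA/engine state afterwards
hosted-engine --vm-status
```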
I have posted this exact problem on Server Fault and on the Linux NFS list,
but it has not been answered yet. Maybe you can help me.
I have a situation with the kernel NFS server. I have two exports with
exactly the same ACLs, with full permissions for the
informat...@adtest.xx.xx.xx group. One is
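One hedged debugging step for a case like this (assuming root on the NFS server; exportfs ships with nfs-utils) is to compare the effective options the kernel is actually applying to each export, since defaults inherited from /etc/nfs.conf can differ from what /etc/exports appears to say:

```
# Print every active export together with its full, effective option list
exportfs -v

# After editing /etc/exports, re-export everything and re-check
exportfs -ra
exportfs -v
```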
Hi Olaf,

I tried running "gluster volume start hdd force" but sadly it did not change
anything. The raid rebuild has finished now and everything seems to be fine:

md6 : active raid6 sdu1[2] sdx1[5] sds1[0] sdt1[1] sdz1[7] sdv1[3] sdw1[4] sdaa1[8] sdy1[6]
      68364119040 blocks super 1.2 level 6,
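Since that mdstat line is the thing being eyeballed, here is a small, purely local sketch of checking it from a script (the status line is hardcoded from the message above; on a live system it would come from /proc/mdstat):

```shell
# Status line for the rebuilt array, copied from /proc/mdstat as quoted above
line='md6 : active raid6 sdu1[2] sdx1[5] sds1[0] sdt1[1] sdz1[7] sdv1[3] sdw1[4] sdaa1[8] sdy1[6]'

# Member devices are the fields after "md6 : active raid6"
members=$(printf '%s\n' "$line" | awk '{print NF - 4}')

# A failed member would carry an "(F)" marker after its slot number
failed=$(printf '%s\n' "$line" | awk '{n=0; for (i=1; i<=NF; i++) if ($i ~ /\(F\)/) n++; print n}')

echo "members=$members failed=$failed"   # → members=9 failed=0
```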
Hello Strahil,

I tried restarting the glusterd.service on storage2 but it had no effect.
What do you mean exactly by "set the node in maintenance"? Only the
"ovirthostX" hosts are available as compute hosts in oVirt. Or is that some
other option in oVirt that I don't know about? The gluster volume