I have a setup with 3 nodes running GlusterFS.
gluster volume create myBrick replica 3 node01:/mnt/data/myBrick \
    node02:/mnt/data/myBrick node03:/mnt/data/myBrick
Unfortunately, node01 seemed to stop syncing with the other nodes, and this
went undetected for weeks!
When I noticed it, I did a
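A minimal set of checks for a situation like this, assuming the volume is named
myBrick as in the create command above, would be the standard self-heal
commands (output formats vary between GlusterFS versions):

# Confirm all peers are connected and all brick processes are online.
gluster peer status
gluster volume status myBrick

# List entries the self-heal daemon still considers out of sync.
gluster volume heal myBrick info

# Trigger a full self-heal across the replicas (can be I/O-heavy on large volumes).
gluster volume heal myBrick full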
I have a setup with multiple hosts, each of which is administered separately,
so there is no unified uid/gid mapping for the users.
When mounting a GlusterFS volume, a file owned by user1 on host1 might show up
as owned by user2 on host2.
I have looked into POSIX ACLs and bindfs, but neither helps me much.
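For reference, a bindfs remap would look roughly like the sketch below; the
uid/gid values and mount points are hypothetical, and the mapping has to be
maintained separately on every host, which is why it scales poorly here:

# Hypothetical ids: files are owned by uid 1001 (gid 2001) on the volume, but
# the matching local account is uid 1002 (gid 2002). bindfs re-exposes the
# GlusterFS mount with ownership translated on the fly.
sudo bindfs --map=1001/1002:@2001/@2002 /mnt/glusterfs /mnt/glusterfs-mapped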
After I rebooted my GlusterFS servers, I cannot connect from clients any more.
The volume is reported as started, but I have to run volume start force on all
server hosts to make it work again.
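A sketch of the checks and the workaround, assuming a volume named myVolume
(substitute your own); note that the management daemon's systemd unit is
called glusterd or glusterfs-server depending on the package:

# Check whether the brick processes actually came back after the reboot; an
# "N" in the Online column means a brick is down even though the volume is
# reported as Started.
gluster volume status myVolume

# Force-start respawns any missing brick processes.
gluster volume start myVolume force

# Ensure the management daemon is enabled at boot (unit name varies by package).
sudo systemctl enable glusterd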
I am running glusterfs 3.12.1 on Ubuntu 16.04.
Is this a bug?
Here are more details from "gluster volume status":