On 16.12.2016 at 21:21, Riccardo Murri wrote:
Hello,
Micha Ober wrote:
> are you using the 3.7 branch since it was released or did you use
> another version before?
The cluster was installed with 3.7; it didn't exist back in the 3.4 days.
(Actually, it's a short-lived cluster of VMs running on top of OpenStack.)
> I don't
Hi All,
I have a three-node replica 3 cluster.
A network split happened which marked one of the three nodes as offline on the
other two nodes, and that node set itself read-only (RO).
After the network split was fixed, the whole cluster became healthy again, and
the status of all three peers is connected on the three
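For what it's worth, this read-only behaviour is what client-side quorum on a
replica 3 volume would produce if cluster.quorum-type is "auto" (an assumption
about this setup, not something stated above): the isolated node no longer sees
a majority of its replica set, so it stops accepting writes. A minimal sketch of
the majority arithmetic:

    # Sketch: majority-based ("auto") client quorum for one replica set.
    # More than half of the replica's bricks must be reachable for writes.
    def has_write_quorum(reachable_bricks: int, replica_count: int) -> bool:
        majority = replica_count // 2 + 1
        return reachable_bricks >= majority

    print(has_write_quorum(1, 3))  # False -> the isolated node goes read-only
    print(has_write_quorum(2, 3))  # True  -> the other two keep accepting writes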
Hello.
I have some LXC containers on two servers, each container placed on its own
GlusterFS replicated node. Gluster is used for failover: when one server is
down, I can run the container on the second server without losing data.
In one of the containers I have Atlassian Confluence software,
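For reference, a minimal sketch of how one such two-server replicated volume
could be created and mounted; the hostnames, brick paths and volume name below
are placeholders, not taken from the setup described above:

    # Sketch: create and start a 2-way replicated GlusterFS volume for one
    # container's data. server1/server2, the brick paths and "confluence-vol"
    # are hypothetical.
    import subprocess

    volume = "confluence-vol"
    bricks = ["server1:/data/bricks/confluence",
              "server2:/data/bricks/confluence"]

    # --mode=script answers the interactive prompts (newer releases warn that
    # replica 2 volumes are prone to split-brain).
    subprocess.run(["gluster", "--mode=script", "volume", "create", volume,
                    "replica", "2"] + bricks, check=True)
    subprocess.run(["gluster", "volume", "start", volume], check=True)

    # The surviving host can then mount the volume for the container, e.g.:
    #   mount -t glusterfs server1:/confluence-vol /var/lib/lxc/confluence/rootfs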
Rafi,
Thanks, the .meta feature, which I didn't know about, is very nice. I have
finally captured debug logs from a client and the bricks.
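For anyone else who did not know about it: .meta is a virtual directory exposed
at the root of a FUSE mount, so client-side state can be read like ordinary
files. A small sketch, assuming a placeholder mount point /mnt/vol (the entries
present may differ between Gluster versions):

    # Sketch: peek at the .meta virtual namespace of a GlusterFS FUSE mount.
    import os

    meta = "/mnt/vol/.meta"

    # Top-level entries on this client (graphs, frames, version, ...).
    print(os.listdir(meta))

    # Translators making up the active client-side graph.
    print(os.listdir(os.path.join(meta, "graphs", "active")))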
A mount log:
- http://pastebin.com/Tjy7wGGj
FYI rickdom126 is my client's hostname.
Brick logs around that time:
- Brick1: http://pastebin.com/qzbVRSF3
- Brick2:
OK, I found some documentation; I should have searched better:
https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
Replying to myself on some of the questions:
Q2.1: files are hard-linked in .gluster/index, so the recovery process is done
by the
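To make the Q2.1 answer concrete: on each brick the entries needing heal are
kept as hard links named after the file's GFID inside the brick's index
directory (commonly .glusterfs/indices/xattrop; treat the exact path as an
assumption and check it for your version), and the self-heal daemon crawls that
directory instead of the whole volume. A small sketch of listing the pending
GFIDs on one brick:

    # Sketch: list GFIDs pending heal from a brick's index directory.
    # /data/brick1 is a placeholder brick path.
    import os

    index_dir = "/data/brick1/.glusterfs/indices/xattrop"

    for name in os.listdir(index_dir):
        # The directory also contains a base "xattrop-<uuid>" file that the
        # per-file entries are hard-linked to; skip it, print the real GFIDs.
        if not name.startswith("xattrop-"):
            print("pending heal:", name)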
Hello,
I am testing different use cases where I am not sure I fully understand how
Gluster (3.9 here) self-healing works. The context is a dispersed 4+2 volume
“vol1” on 6 nodes gl[1..6], one brick per node.
1) While a client is reading a 5 GB file F on vol1, the file on gl6 (actually a
1/4
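For the record, the "1/4" above is consistent with how a dispersed 4+2 volume
stores data: each file is cut into 4 data fragments plus 2 redundancy fragments,
one fragment per brick, so each brick holds roughly a quarter of the file and
any 2 of the 6 bricks can be lost without losing data. A quick sketch of that
arithmetic, using the 5 GB file from the test above:

    # Sketch: size/redundancy arithmetic for a dispersed 4+2 volume like vol1.
    data, redundancy = 4, 2
    bricks = data + redundancy          # 6 bricks, one per node gl1..gl6
    file_size_gb = 5.0                  # the 5 GB test file F

    fragment_gb = file_size_gb / data   # ~1.25 GB of data stored on each brick
    tolerated_failures = redundancy     # any 2 bricks may be down or lost

    print(f"{bricks} bricks, ~{fragment_gb:.2f} GB per brick, "
          f"survives {tolerated_failures} brick failures")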