Hi Satheesaran,
gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 3caae601-74dd-40d1-8629-9a61072bec0f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/engine/brick
Brick2: gluster1:/gluster/engine/br
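(The "1 x (2 + 1) = 3" layout means one replica set: two data bricks plus one arbiter brick.)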
On Sat, Jun 24, 2017 at 3:17 PM, Abi Askushi wrote:
> Hi all,
>
> For the records, I had to manually remove the conflicting directory and its
> respective gfid from the arbiter volume:
>
> getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>
> That gave me the gfid: 0x277c9caa9dce4a17a2a93775357befd5
Hi all,
For the records, I had to manually remove the conflicting directory and its
respective gfid from the arbiter volume:
getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
That gave me the gfid: 0x277c9caa9dce4a17a2a93775357befd5
Then cd .glusterfs/27/7c
rm -rf 277c9caa-9dce-4a17-a2a9-3775357befd5
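In full, the sequence on the arbiter brick was roughly the following (a sketch only; the brick root /gluster/engine/brick is taken from the volume info above, so verify the gfid on your own setup before deleting anything):

    cd /gluster/engine/brick
    # read the trusted.gfid xattr of the conflicting directory
    getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
    # -> trusted.gfid=0x277c9caa9dce4a17a2a93775357befd5
    # for a directory, the gfid entry is a symlink under
    # .glusterfs/<first hex byte>/<second hex byte>/, named as the gfid in UUID form
    rm -rf .glusterfs/27/7c/277c9caa-9dce-4a17-a2a9-3775357befd5
    # remove the conflicting directory itself, then trigger a heal so the
    # good replicas recreate it
    rm -rf e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
    gluster volume heal engine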
Hi Denis,
I receive permission denied as below:
gluster volume heal engine split-brain latest-mtime /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
Healing /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent failed: Operation not permitted.
Volume heal failed.
When I shut down host3, no split brain is reported.
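For reference, the CLI also offers bigger-file and source-brick resolution policies, e.g. (picking gluster0 as the source here is purely illustrative):

    gluster volume heal engine split-brain source-brick \
        gluster0:/gluster/engine/brick \
        /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent

As far as I understand, these policies only resolve data/metadata split-brain on files, not entry/gfid split-brain on a directory like ha_agent, which would explain the "Operation not permitted" above.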
Hello Abi,
On Fri, Jun 23, 2017 at 4:47 PM, Abi Askushi wrote:
> Hi All,
>
> I have a 3-node oVirt 4.1 setup. I lost one node due to RAID controller
> issues. Upon restoration I have the following split-brain, although the
> hosts have mounted the storage domains:
>
> gluster volume heal engine info split-brain
Hi All,
I have a 3-node oVirt 4.1 setup. I lost one node due to RAID controller
issues. Upon restoration I have the following split-brain, although the
hosts have mounted the storage domains:
gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
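To see which copy each brick blames, the AFR changelog xattrs of the entry can be read directly on every brick (brick path taken from the output above; the trusted.afr.engine-client-* names assume the default client naming for this volume):

    getfattr -d -m . -e hex \
        /gluster/engine/brick/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
    # non-zero trusted.afr.engine-client-<N> values show which replicas
    # this brick holds pending changes against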