Hi Abhishek,

It seems your peer 10.32.1.144 disconnected while the remove-brick operation was in progress; see the following entries in the glusterd log:

[2016-02-18 10:37:02.816009] E [MSGID: 106256] 
[glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management: 
Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs 
[Invalid argument]
[2016-02-18 10:37:02.816061] E [MSGID: 106265] 
[glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management: 
Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
The message "I [MSGID: 106004] 
[glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer 
<10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state <Peer in 
Cluster>, has disconnected from glusterd." repeated 25 times between 
[2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]



If you are still facing this issue, could you please paste the output of
"# gluster peer status" here?

Thanks,
~Gaurav

----- Original Message -----
From: "ABHISHEK PALIWAL" <[email protected]>
To: [email protected]
Sent: Friday, February 19, 2016 2:46:35 PM
Subject: [Gluster-users] Issue in Adding/Removing the gluster node

Hi, 


I am working on a two-board setup in which the boards are connected to each
other. Gluster version 3.7.6 is running, and I added two bricks in replica 2
mode, but when I manually removed (detached) one board from the setup, I got
the following error:

volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick 
force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume 
c_glusterfs 

Please find the log files attached. 


Regards, 
Abhishek 


_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users