1) Create the brick directory "/opt/gluster_data/eccp_glance" on the node(s) where the directories were deleted.
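
For example, a minimal sketch for the affected node (192.168.64.29, per the status output below; the path must match the brick definition exactly):

    # recreate the removed brick directory at its original path
    mkdir -p /opt/gluster_data/eccp_glance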

2) From any of the storage nodes, execute the following (an example session follows the list):

1. gluster volume start <volume_name> force : to restart the brick process.
2. gluster volume status <volume_name> : to check that all the brick
   processes have started.
3. gluster volume heal <volume_name> full : to trigger self-heal onto the
   recreated bricks.
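
For example, assuming the volume is named eccp_glance to match the brick directory (substitute your actual volume name):

    # force-start the volume to bring the offline brick process back up
    gluster volume start eccp_glance force

    # confirm that every brick now shows Online = Y
    gluster volume status eccp_glance

    # trigger a full self-heal so the recreated brick is repopulated from its replica
    gluster volume heal eccp_glance full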

-Shwetha

On 11/28/2013 02:09 PM, 韦远科 wrote:
Hi all,

I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2.

Now there is no corresponding glusterfsd process on this node, and 'gluster volume status' shows that the brick is offline, like this:
Gluster process                                       Port    Online  Pid
--------------------------------------------------------------------------
Brick 192.168.64.11:/opt/gluster_data/eccp_glance     N/A     Y       2513
Brick 192.168.64.12:/opt/gluster_data/eccp_glance     49161   Y       2542
Brick 192.168.64.17:/opt/gluster_data/eccp_glance     49164   Y       2537
Brick 192.168.64.18:/opt/gluster_data/eccp_glance     49154   Y       4978
Brick 192.168.64.29:/opt/gluster_data/eccp_glance     N/A     N       N/A
Brick 192.168.64.30:/opt/gluster_data/eccp_glance     49154   Y       4072
Brick 192.168.64.25:/opt/gluster_data/eccp_glance     49155   Y       11975
Brick 192.168.64.26:/opt/gluster_data/eccp_glance     49155   Y       17947
Brick 192.168.64.13:/opt/gluster_data/eccp_glance     49154   Y       26045
Brick 192.168.64.14:/opt/gluster_data/eccp_glance     49154   Y       22143


So, is there a way to bring this brick back to normal?

thanks!


-----------------------------------------------------------------
韦远科
Computer Network Information Center, Chinese Academy of Sciences



_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users

