This was tested using the most recent build of GlusterFS 3.1.2 (built Jan 18 2011 11:19:54).
System setup:
Volume Name: brick
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: linguest2:/data/exp
Brick2: linguest3:/data/exp
Brick3: linguest4:/data/exp
Brick4: linguest5:/data/exp
The scenario is to remove both linguest4 and linguest5 from the volume.
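As I understand it, with a replica count of 2 the bricks pair up in the order they are listed, so (assuming the standard pairing) the replica sets here are:

  replicate-0: linguest2:/data/exp <-> linguest3:/data/exp
  replicate-1: linguest4:/data/exp <-> linguest5:/data/exp

In other words, linguest4 and linguest5 are not extra copies of the first pair; together they form the second replica set, and the distribute layer spreads files between the two sets.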

The command below is run to remove the two bricks:
gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful
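For comparison, a data-preserving removal would migrate files off the pair before dropping it. As far as I can tell that start/status/commit workflow only exists in releases newer than the 3.1.2 build used here, so treat the exact commands below as an assumption on my part:

  # 3.3+ syntax (not available in 3.1.x): migrate data off, then commit
  gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp start
  gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp status
  gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp commit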
As soon as the command above completes, I get this log message every 3 seconds:
[2011-02-09 10:28:43.957955] E [afr-common.c:2602:afr_notify] 
brick-replicate-1: All subvolumes are down. Going offline until atleast one of 
them comes back up.

That log message repeats until I log onto all the other machines and unmount and remount the gluster mount point.
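For the record, the workaround on each client is just the following (/mnt/gluster is a placeholder for the actual mount point):

  umount /mnt/gluster
  mount -t glusterfs linguest2:/brick /mnt/gluster

Presumably the remount forces the client to fetch a fresh volfile that no longer references the removed replicate-1 pair, which is why the message stops.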
Also, now that the two bricks have been removed, there is data missing, even though the documentation states that "Distributed replicated volumes replicate (mirror) data across two or more nodes in the cluster", so I would have expected all of the data to still be available.
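Assuming the removed bricks are still intact, the missing files should still be sitting on them as plain files under each brick directory, so I would expect them to be recoverable by copying them back in through a client mount. A rough sketch (unverified; /mnt/gluster again a placeholder):

  # run on linguest4, then repeat on linguest5 for anything still missing
  rsync -av /data/exp/ /mnt/gluster/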
The only difference with this setup is that the volume initially had just linguest2 and linguest3; linguest4 and linguest5 were added later.
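(For completeness: after an add-brick the usual next step, as I understand it, is a rebalance so that existing files spread onto the new pair, e.g.:

  gluster volume rebalance brick start

I mention it in case it is relevant to the remove-brick behaviour.)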
Thanks
Nick