It's a local, standard AFR setup. I believe all the files are actually 
the same, but I'll verify that again. It just does this for a LOT of files, and 
they are all the same files (nothing has really changed).

About WAN: I have mostly given up on WAN replication at the moment, so I use 
glusterfs for local groups of machines that are on the same switch, and I use a 
separate solution to sync between WAN glusters.

So how do I delete without erasing the file from the entire gluster?

I'm assuming I need to:

1) Unmount all the clients
2) Erase and recreate /data/export on all nodes other than the chosen "master"
3) Remount the clients, and access the files

Is that right?
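For what it's worth, a lighter-weight alternative to wiping the whole brick would be to delete just the affected files from every node except the preferred one, then stat them through a client mount to re-trigger self-heal. A rough sketch (the /data/export brick path and the wms-server.jar path come from this thread; the scratch directory is only a stand-in so the commands are safe to try):

```shell
# Stand-in for the brick directory on one of the NON-preferred nodes.
# In a real run this would be /data/export on that node.
BRICK=$(mktemp -d)
mkdir -p "$BRICK/lib"
touch "$BRICK/lib/wms-server.jar"   # the conflicting local copy

# On each node except the chosen "master", remove the conflicting copy:
rm "$BRICK/lib/wms-server.jar"

# Afterwards, accessing the file through the client mount (e.g.
# `stat /mnt/gluster/lib/wms-server.jar`) should make AFR self-heal
# copy it back from the preferred subvolume.
```

This avoids recreating the entire export directory, but I'd still unmount (or at least quiesce) the clients first so nothing writes to the file mid-recovery.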


On Aug 4, 2010, at 4:14 AM, Tejas N. Bhise wrote:

> Is this over the WAN replicated setup ? Or a local setup ?
> 
> ----- Original Message -----
> From: "Count Zero" <[email protected]>
> To: "Gluster General Discussion List" <[email protected]>
> Sent: Wednesday, August 4, 2010 8:38:02 AM
> Subject: [Gluster-users] Split Brain?
> 
> I am seeing a lot of those in my cluster client's log file:
> 
> [2010-08-04 04:06:30] E [afr-self-heal-data.c:705:afr_sh_data_fix] replicate: 
> Unable to self-heal contents of '/lib/wms-server.jar' (possible split-brain). 
> Please delete the file from all but the preferred subvolume.
> 
> How do I recover from this without losing my files?
> 
> Thanks,
> CountZ
> 
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

