That's a very good point. I've been evaluating GlusterFS since version 1.0 and refused to use it for one reason and one reason only: the split-brain problem. With version 3.3 I finally switched to GlusterFS, but after a few months of production use, I'm thinking of going back to separate servers with big RAIDs.

/home/freecloud# time echo * |wc -w
87926

real    16m42.242s
user    0m0.384s
sys    0m0.072s
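For what it's worth, a rough way to tell how much of this is plain readdir versus the per-entry metadata lookups a FUSE mount adds on top of it (the path is just the mount from above, and the commands are only a sketch of how I would measure it):

/home/freecloud# time ls -f > /dev/null    # readdir only: no sort, no per-entry stat
/home/freecloud# time ls -l > /dev/null    # adds a stat (a network lookup on a GlusterFS mount) per entry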

I just don't get it. Up until version 3.3, why would I need OpenStack support, QEMU support, etc., when after one simple reboot I could lose part of my data?

On 11/6/12 11:35 AM, Fernando Frediani (Qube) wrote:
Joe,

I don't think we have to accept this, because it is simply not acceptable. I have 
seen countless people complaining about this problem for a while, and it seems no 
improvements have been made.
The ramdisk idea, although it might help, looks more like a chewing-gum fix. I 
have seen other distributed filesystems that don't suffer from the same problem, 
so why does Gluster have to?

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
