Hello all

I'm testing glusterfs 3.3.0-1 on a couple of CentOS 6.3 servers that run KVM.

After inserting a new empty brick due to a simulated failure, the self-healing process kicked in, as expected.

After that, however, the VMs became mostly unusable due to I/O delay.

It looks like the self-healing process doesn't let anything else run normally.

I believe glusterfs 3.3 has some improvements to avoid this problem.

Is there some performance tuning that has to be done?

Is there some specific command to start a special self-healing process
for systems that have large files (like virtualization systems)?
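(For context, these are the heal commands and tunables I've been looking at. This is a sketch only: the volume name "myvol" is a placeholder, and the option names are the ones I believe exist in GlusterFS 3.3's replicate translator; they should be checked against `gluster volume set help` before use.)

```shell
# Check which files still need healing on a replicated volume
gluster volume heal myvol info

# Trigger the normal, index-based self-heal...
gluster volume heal myvol
# ...or force a full crawl of the whole volume
gluster volume heal myvol full

# Tunables that may matter for large VM images (verify availability first):
# heal only the changed blocks of a file instead of copying it whole
gluster volume set myvol cluster.data-self-heal-algorithm diff
# limit how many files are healed in the background concurrently
gluster volume set myvol cluster.background-self-heal-count 1
```

The "diff" algorithm in particular seems relevant for multi-gigabyte qcow2/raw images, since a full-file copy of each image is what would saturate the bricks.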

thanks, best regards,
João

PS: this probably isn't a new problem; I've picked up the email
subject from a message dating Mar 24, 2011.
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
