We use GlusterFS with OpenNebula on two KVM nodes, mirroring (replica 2) across 6 bricks per node. We recommend Gluster 3.3 with a dedicated back-end network connection, 10GbE or InfiniBand. For stress testing the file system you can run dbench on a cluster of VMs. Gluster 3.2 wasn't very robust to node failure, but 3.3 is far more stable; I would say it's ready for development lab use and moderately heavy loads. For best performance, I would suggest using cgroups to shield the glusterfs processes from the KVM processes, or using separate storage hosts.
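
To make that a bit more concrete, here is a rough sketch of the kind of setup described above. The hostnames (node1/node2), brick paths, volume name, mount point and cgroup parameters are just placeholders, and the cgroup part assumes the libcgroup tools are installed; adjust everything for your own environment:

    # replica-2 volume, 6 bricks per node, each brick mirrored across the two hosts
    gluster volume create vmstore replica 2 \
        node1:/export/brick1 node2:/export/brick1 \
        node1:/export/brick2 node2:/export/brick2 \
        node1:/export/brick3 node2:/export/brick3 \
        node1:/export/brick4 node2:/export/brick4 \
        node1:/export/brick5 node2:/export/brick5 \
        node1:/export/brick6 node2:/export/brick6
    gluster volume start vmstore

    # quick stress test, run from one or more VMs with the volume mounted
    dbench -D /mnt/vmstore -t 300 10

    # give the gluster daemons their own cgroup so the KVM processes can't starve them
    cgcreate -g cpu,memory:/glusterfs
    cgset -r cpu.shares=2048 glusterfs
    cgclassify -g cpu,memory:/glusterfs $(pidof glusterfsd glusterfs)
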
On Fri, Nov 2, 2012 at 5:03 AM, Giovanni Toraldo <[email protected]> wrote:
> On Fri, Nov 2, 2012 at 9:46 AM, Timothy Ehlers <[email protected]> wrote:
> > How does an instance react to a gluster node failure? On my POC cloud,
> > killing a server causes all the other boxes to hang while the node
> > times out in gluster.
>
> This is a GlusterFS well-known configuration issue, you may read
> documentation or ask about it on their ML.
>
> --
> Giovanni Toraldo
> http://gionn.net
> _______________________________________________
> Users mailing list
> [email protected]
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
