We use GlusterFS with OpenNebula on two KVM nodes, mirrored with 6
bricks per node. We recommend Gluster 3.3 with a dedicated back-end network
connection, 10GbE or InfiniBand. For stress-testing the file system you can use
dbench on a cluster of VMs. Gluster 3.2 wasn't very robust to node failure,
but 3.3 is.
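For the dbench run mentioned above, a minimal invocation from inside each VM might look like this (the datastore mount point is an assumption; substitute your Gluster-backed path):

```shell
# Run dbench with 10 simulated clients for 60 seconds against a
# directory on the GlusterFS-backed datastore.
# /var/lib/one/datastores is an assumed mount point, not from the thread.
dbench -t 60 -D /var/lib/one/datastores 10
```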
Hi,
Take a look at OpenNebula 3 Cloud Computing by Giovanni Toraldo
It describes OpenNebula working with GlusterFS and MooseFS. I have been
following it for my OpenNebula deployment.
Beh
On Fri, Nov 2, 2012 at 8:31 AM, Timothy Ehlers ehle...@gmail.com wrote:
Without a NetApp, what is the
Bought it; it's not that helpful at explaining real-world scenarios and how to
properly set things up to avoid them. How does an instance react to a Gluster node
failure? On my POC cloud, killing a server causes all the other boxes to
hang while the node times out in Gluster.
On Nov 2, 2012 1:50 AM, Teik Hooi
Hi,
So far in my deployment, I have put all the OpenNebula server files into /var/lib/one,
which sits in a 2-replica GlusterFS setup. This way, if one server dies, my
VMs are still running and I can start the one server from the working server. I
am working now to use OpenNebula hooks to move the VMs from failed
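A 2-replica volume like the one described can be created roughly as follows; a minimal sketch, assuming hypothetical hostnames (gfs1, gfs2), volume name, and brick paths that are not from the thread:

```shell
# Create a 2-way replicated volume across two servers.
# gfs1, gfs2, one-vol, and the brick paths are hypothetical names.
gluster volume create one-vol replica 2 \
    gfs1:/export/brick1 gfs2:/export/brick1
gluster volume start one-vol

# Mount it where OpenNebula keeps its files, so either server
# can run the frontend against the same data.
mount -t glusterfs gfs1:/one-vol /var/lib/one
```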
On Fri, Nov 2, 2012 at 9:46 AM, Timothy Ehlers ehle...@gmail.com wrote:
How does an instance react to a Gluster node
failure? On my POC cloud, killing a server causes all the other boxes to
hang while the node times out in Gluster.
This is a well-known GlusterFS configuration issue; you may
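The hang described above typically lasts for GlusterFS's network.ping-timeout, which defaults to 42 seconds; one common mitigation is to lower it so clients fail over to the surviving replica sooner, at the cost of more disruption from transient network glitches. A sketch (the volume name one-vol is hypothetical):

```shell
# Show the volume's current options (ping timeout defaults to 42s)
gluster volume info one-vol

# Lower the timeout so clients give up on a dead node sooner.
# Trade-off: on a flaky network, brief outages now trigger
# full disconnect/reconnect cycles.
gluster volume set one-vol network.ping-timeout 10
```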
Without a NetApp, what is the recommended setup? I am looking at GlusterFS;
is this a common choice?
--
Tim Ehlers
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org