Re: [one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-17 Thread Bart
Hi Daniel,

Thanks for the insight.

So basically, to sum it up, there is currently no way of running the
OpenNebula management node (with all functionality inside one VM) on its
own virtualisation cluster (and thus having it manage itself along with
the rest of the cluster).

This would mean that you need two physical servers to create a (proper)
redundant (active/passive) setup?!

That's a shame, since we'd like to keep hardware to a minimum for this. But
if the best practice is to have two physical servers, then I guess we'll
have to live with that and just make it happen.

Although I'm still hoping for a better solution, I'd be very interested
in hearing how others created their redundant OpenNebula management node:
is 2x server hardware the best practice, or are there other solutions
(with less hardware) to achieve this?



2014-11-13 12:33 GMT+01:00 Daniel Dehennin daniel.dehen...@baby-gnu.org:

 Giancarlo De Filippis gdefilip...@ltbl.it writes:

  I hope (like you) that someone (users or OpenNebula Team) have best
  practices on how to run OpenNebula in a VM.

 Hello,

 The ONE frontend VM cannot manage itself; you must use something else.

 I made a test with pacemaker/corosync and it can be quite easy[1]:

 #+begin_src conf
 primitive Stonith-ONE-Frontend stonith:external/libvirt \
         params hostlist="one-frontend" hypervisor_uri="qemu:///system" \
                pcmk_host_list="one-frontend" pcmk_host_check="static-list" \
         op monitor interval="30m"
 primitive ONE-Frontend-VM ocf:heartbeat:VirtualDomain \
         params config="/var/lib/one/datastores/one/one.xml" \
         op start interval="0" timeout="90" \
         op stop interval="0" timeout="100" \
         utilization cpu="1" hv_memory="1024"
 group ONE-Frontend Stonith-ONE-Frontend ONE-Frontend-VM
 location ONE-Frontend-run-on-hypervisor ONE-Frontend \
         rule $id="ONE-Frontend-run-on-hypervisor-rule" 40: #uname eq nebula1 \
         rule $id="ONE-Frontend-run-on-hypervisor-rule-0" 30: #uname eq nebula3 \
         rule $id="ONE-Frontend-run-on-hypervisor-rule-1" 20: #uname eq nebula2
 #+end_src

 I have trouble with my cluster because my nodes _and_ the ONE frontend
 need to access the same SAN.

 My nodes have two LUNs over multipath FC (/dev/mapper/SAN-FS{1,2}), they
 are both PV of a cluster volume group (cLVM) with a GFS2 on top.

 So I need:

 - corosync for messaging
 - dlm for cLVM and GFS2
 - cLVM
 - GFS2
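
 In crm shell terms, that storage stack looks roughly like the following
 sketch (the resource agent names come from the standard pacemaker, lvm2
 and gfs2 packages; the logical volume and mount point are hypothetical
 examples, not my actual layout):

 #+begin_src conf
 primitive DLM ocf:pacemaker:controld \
         op monitor interval="60s"
 primitive cLVMd ocf:lvm2:clvmd \
         op monitor interval="60s"
 primitive SAN-FS1 ocf:heartbeat:Filesystem \
         params device="/dev/vg-san/lv-one" directory="/srv/san-fs1" fstype="gfs2" \
         op monitor interval="20s"
 group Storage DLM cLVMd SAN-FS1
 clone Storage-Clone Storage meta interleave="true"
 #+end_src

 The clone runs a copy of the group on every node, which is what lets
 them all mount the same GFS2 at once.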

 I added the LUNs as raw block disks to my frontend VM and installed the
 whole stack in it, but I'm facing some communication issues and have only
 managed to solve some of them[3].

 According to the pacemaker mailing list, having the nodes _and_ a VM in
 the same pacemaker/corosync cluster “sounds like a recipe for
 disaster”[2].

 Hope this will help you have a picture of the topic.

 Regards.

 Footnotes:
 [1]
 http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html-single/Clusters_from_Scratch/index.html

 [2]
 http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/023000.html

 [3]
 http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/022964.html

 --
 Daniel Dehennin
 Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
 Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Bart G.


Re: [one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-17 Thread Daniel Dehennin
Bart b...@pleh.info writes:

 Hi Daniel,

Hello,

 So basically, to sum it up, there is currently no way of running the
 OpenNebula management node (with all functionality inside one VM) on its
 own virtualisation cluster (and thus managing itself along with the rest of
 the cluster).

You can run the VM on the same hardware, but it will not be managed by
OpenNebula, since you need OpenNebula running to start OpenNebula VMs.

That's why the documentation[1] explains the setup of an H-A system.

If you have two physical servers, you may create two OpenNebula VMs, one
master and one slave, with a replicated database between them.
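
As a sketch of what the replicated-database part could look like with
MySQL master/slave replication (the hostnames, user and password below
are placeholders, and oned.conf must point at the MySQL backend on both
frontends):

#+begin_src conf
# /etc/my.cnf fragment, on both frontends -- server-id must differ
[mysqld]
server-id    = 1            # use 2 on the slave
log-bin      = mysql-bin
binlog-do-db = opennebula

# On the master:
#   mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';
# On the slave:
#   mysql> CHANGE MASTER TO MASTER_HOST='frontend1',
#       -> MASTER_USER='repl', MASTER_PASSWORD='secret';
#   mysql> START SLAVE;
#+end_src

Only one oned should write to the database at a time; the failover
mechanism from the HA guide decides which VM is the active master.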

Regards.

Footnotes: 
[1]  
http://docs.opennebula.org/4.10/advanced_administration/high_availability/oneha.html

-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF




[one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-13 Thread Giancarlo De Filippis
 

Hi all,

does anyone have documentation for a sample private cloud structure
with:

- Two nodes in HA

- Front-end on a virtual machine in HA

- Storage on a GlusterFS file system

Thanks all..

GD


Re: [one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-13 Thread Bart
Hi,

Do you mean to have the frontend (OpenNebula management) running on the
actual OpenNebula cluster?

If that's the case then I would also be very interested in this scenario :)

As for GlusterFS, we've followed these instructions with success:


   - http://docs.opennebula.org/4.10/
   - http://docs.opennebula.org/4.10/administration/storage/gluster_ds.html
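
For reference, the Gluster datastore described in that second guide ends
up as a shared datastore whose images are accessed through libgfapi. A
sketch of the datastore template as we understand it from the 4.10 guide
(the host and volume names are placeholders; pointing GLUSTER_HOST at a
round-robin DNS name is what gives the failover behaviour):

#+begin_src conf
# gluster_ds.conf -- register with: onedatastore create gluster_ds.conf
NAME           = "gluster_images"
DS_MAD         = fs
TM_MAD         = shared
DISK_TYPE      = GLUSTER
GLUSTER_HOST   = "gluster.example.com:24007"
GLUSTER_VOLUME = "one_datastore"
CLONE_TARGET   = "SYSTEM"
LN_TARGET      = "NONE"
#+end_src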


All of the other instructions we followed were also found on the
documentation site.

With these, we've created the following setup:


   - *GlusterFS:* Two storage hosts (running CentOS 7) in the back-end
   which provide the whole image/datastore for OpenNebula. HA is set up via
   round-robin DNS. After testing, we've found this to work quite well in
   terms of performance and redundancy (better than expected). All network
   ports are set up as a bond using balance-alb.
   - *Hypervisors:* At the moment, 4 hypervisors (will eventually grow to 8)
   running CentOS 7. They have two network ports set up as a bond using
   balance-alb, connected to the virt/storage VLAN, and two network ports
   set up as a bond using active-backup, which is used for the bridge port
   (with VLAN tagging enabled). Active-backup seems to be the proper mode
   for a bridge port, since a bridge doesn't work that well with
   balance-alb.
   - *OpenNebula management node:* This includes the OpenNebula daemon,
   Sunstone and all other functions. Since we're still building up this
   environment, the hypervisors and storage have been arranged and we're
   happy with how those parts are set up, but the management node is the
   part we're not yet sure about. We're considering buying two physical
   servers for this job and creating an active/passive solution as
   described in the documentation:
   http://docs.opennebula.org/4.10/advanced_administration/high_availability/oneha.html.
   However, buying hardware just for this seems a waste, since these
   servers won't be doing that much (resource-wise).
      - We do have plans to test this node, in a dual setup, on the actual
      OpenNebula cluster, but something is holding us back: I can't find
      any proof of someone achieving this, and we're also seeing some
      issues that might occur when OpenNebula is managing itself (e.g.
      live migration, reboot, shutdown, etc.).
   - *Virtual machine HA setup:* We haven't actually started with this part
   yet, but this document describes how you can create a setup where
   virtual machines become HA:
   http://docs.opennebula.org/4.10/advanced_administration/high_availability/ftguide.html
   For us this is something we'll probably start looking into once we've
   found a proper setup for the OpenNebula management node.
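
As an illustration of the bonding layout above, a sketch using CentOS 7
network scripts (the interface names and address are placeholders; each
physical port additionally gets MASTER=bondX / SLAVE=yes in its own
ifcfg file):

#+begin_src conf
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- virt/storage VLAN, balance-alb
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-alb miimon=100"
BOOTPROTO=static
IPADDR=192.0.2.10
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond1 -- bridge uplink, active-backup
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BRIDGE=br0
ONBOOT=yes
#+end_src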


Hopefully this info helps you a little, and I also hope someone else can
elaborate on running OpenNebula as a VM on its own cluster. Or, at least,
share other best practices on how to run OpenNebula in a VM without buying
two physical servers for this job (any insight on this subject would be
helpful :)


-- Bart







-- 
Bart G.


Re: [one-users] Private cloud with Front-end on Virtual Machine HA

2014-11-13 Thread Giancarlo De Filippis
 

Thanks so much Bart,

That's the case, because a VM running on the cluster is more similar to
the VMware HA solution (only two hosts, without a separate front-end).

As for GlusterFS, I've already used this solution successfully on my
public cloud.

You helped me with the port setup of the hypervisors :)

I hope (like you) that someone (users or the OpenNebula team) has best
practices on how to run OpenNebula in a VM.

Thanks.

Giancarlo
