Hi

On Fri, Oct 28, 2011 at 2:11 AM, Karl Katzke <[email protected]> wrote:
> Thanks! That helps with the VM pinning aspect greatly. I didn't realize that
> the custom variables could be used in constraints.
>
> My other questions concerned a cold start of the cluster and the cluster
> drivers.
>
> There is very little information about what happens during a cold start. In
> our case, some dependencies need to start before other cloud/cluster services
> can fully start. Is there a way to script this startup process so that we can
> be sure that these depended-upon VMs are started first?

There is no tool that does this for OpenNebula 3.0. However, it should be
fairly easy to script that behavior for your services, either with the onevm
commands or with the Ruby OCA API. For example, you can instantiate a
template, wait until that VM is running, and then instantiate the VMs that
depend on the first one, and so on (there is a rough sketch of this further
down in this mail).

> Second, what is the proper configuration in /etc/one/oned.conf to be using
> the TM_LVM transfer manager driver simultaneously with the TM_SHARED transfer
> manager? The manual and configuration files aren't clear on this point ...
> They emphasize that multiple transfer managers can be in use unlike multiple
> image manager drivers, but don't give an example of that configuration.

Simply add them to oned.conf; uncommenting the TM lines that come in the
stock oned.conf should be enough. Then, when you create a host, specify the
TM driver (among those defined in oned.conf) that you want to use with that
host.
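For reference, the uncommented transfer manager sections in oned.conf would
look roughly like this (following the layout of the stock 3.0 oned.conf;
double-check the exact names and paths shipped with your installation):

  TM_MAD = [
      name       = "tm_shared",
      executable = "one_tm",
      arguments  = "tm_shared/tm_shared.conf" ]

  TM_MAD = [
      name       = "tm_lvm",
      executable = "one_tm",
      arguments  = "tm_lvm/tm_lvm.conf" ]

With both of them defined, each host is registered with the TM driver it
should use, so shared and LVM based hosts can coexist in the same cloud.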
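Coming back to the cold start question, here is a very rough sketch of a
startup script using the onevm commands (the template file names are made
up, and you should check the exact strings your onevm output prints):

  #!/bin/sh
  # Start the VM that everything else depends on. "onevm create" prints
  # the id of the new VM as "ID: <n>"; keep only the number.
  DB_ID=$(onevm create db_vm.template | cut -d' ' -f2)

  # Poll until the VM is actually running (adjust the grep if your
  # "onevm show" output formats LCM_STATE differently).
  while ! onevm show "$DB_ID" | grep -q "LCM_STATE.*RUNNING"; do
      sleep 5
  done

  # Now it is safe to start the VMs that depend on it.
  onevm create web_vm.template
  onevm create app_vm.template

The same loop can be done with the Ruby OCA bindings if you prefer to check
the VM state through the API instead of parsing the CLI output.

Cheers

Ruben

> Thanks,
> Karl Katzke
>
> Sent from my shoePhone.
>
> On Oct 27, 2011, at 17:25, "Ruben S. Montero" <[email protected]>
> wrote:
>
>> Hi
>>
>> A couple of hints...
>>
>>> We also want those hosts to start first when a host or the cluster comes
>>> back up. If I’m reading the documentation correctly, that means that we
>>> would use the rank algorithm somehow — but by my read of the
>>> documentation, that was only for host selection, not boot priority.
>>
>> RANK/REQUIREMENTS is for host selection. But I am not sure that I
>> really understand this. If you want to use first those hosts that have
>> been recently rebooted, you could simply add a probe with uptime and
>> RANK the hosts based on that.
>>
>>> Manually coupling a particular VM to a host would also work, but I can’t
>>> figure out how to do that yet within the scope of the “VM Template” scheme
>>> either.
>>
>> You can add the names of the VMs you want to "pin" to a particular
>> host in a special variable. Something like:
>>
>> > onehost update host0
>>
>> # At the editor add
>> RAID0_VMS="db_vm web_vm"
>>
>> Then a template for a VM "pinned" to that host would look like:
>>
>> NAME="db_vm"
>> ...
>> REQUIREMENTS="RAID0_VMS=\"*$NAME*\""
>>
>> Instead of names you could pin them by MAC, for example: just add
>> RAID0_MACS="11:22:33:44:55:66 11:22:33:44:55:66" and the requirements
>> would look like REQUIREMENTS="RAID0_MACS=\"*$NIC[MAC]*\""
>>
>>> Last but not least, I’d like to have the SSH transfer manager, the LVM
>>> transfer manager, and the Shared transfer manager all enabled.
>>
>> Yes, simply remove the comments for the TMs.
>>
>>> Thanks,
>>> Karl Katzke
>>> _______________________________________________
>>> Users mailing list
>>> [email protected]
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>> --
>> Ruben S. Montero, PhD
>> Project co-Lead and Chief Architect
>> OpenNebula - The Open Source Toolkit for Cloud Computing
>> www.OpenNebula.org | [email protected]

--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | [email protected] | @OpenNebula
_______________________________________________
Users mailing list
[email protected]
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org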
