Re: [one-users] Picking certain fields /USER/NAME out of $USER[TEMPLATE]
Hi,

You can use $UNAME for the username, or $USER[ATTR] for an individual attribute of the user template [1].

Best regards

[1] http://docs.opennebula.org/4.10/user/virtual_machine_setup/cong.html
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula

On Thu, Nov 13, 2014 at 4:34 PM, Steven Timm t...@fnal.gov wrote:
> Under OpenNebula 3.2 we would include the field $USER[TEMPLATE] in the
> contextualization section, and then add a contextualization script that
> would grab the field /USER/NAME out of the base64-encoded template
> information.
>
> In OpenNebula 4.8 you can still put $USER[TEMPLATE] into your
> contextualization, but now there is a lot more junk in $USER[TEMPLATE],
> namely all the key pairs that have been created via ec2CreateKeyPair for
> each user. At first we did not know what was happening, because
> $USER[TEMPLATE] grew to over 300 KB, a size that actually breaks bash:
> you can't assign a shell variable a value that big.
>
> So now the question is: is there a way to include only the user name
> field of the template in the contextualization section, and nothing
> else, through some combination of Ruby syntax? If so, how?
>
> Steve Timm
> --
> Steven C. Timm, Ph.D (630) 840-8525
> t...@fnal.gov http://home.fnal.gov/~timm/
> Office: Wilson Hall room 804
> Fermilab Scientific Computing Division
> Currently transitioning from: Scientific Computing Services Quadrant,
> Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing
> To: Scientific Computing Facilities Quadrant, Experimental Computing
> Facilities Dept., Project Lead for Virtual Facility Project.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
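Carlos' $UNAME suggestion avoids decoding the template at all. For contrast, here is a minimal sketch of the 3.2-style decode step Steve describes, run against invented sample data (the template content and field layout below are placeholders, not real OpenNebula output):

```shell
# Sketch: decode a base64 USER[TEMPLATE] blob and keep only the NAME field.
# The template content here is made up for illustration.
TEMPLATE_B64=$(printf 'NAME="steve"\nEC2_KEYPAIR="AAAA..."\n' | base64 | tr -d '\n')

# Inside the VM: extract just the one field instead of keeping the whole blob
USER_NAME=$(echo "$TEMPLATE_B64" | base64 -d | sed -n 's/^NAME="\(.*\)"$/\1/p')
echo "$USER_NAME"
```

With $UNAME in the context section there is nothing to decode: the variable already expands to the username on the frontend, so the oversized blob never reaches the VM at all.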
Re: [one-users] How to fix hosts with deleted /var/tmp/one files
Hi Steven,

> On several occasions in the past I have forgotten to tell my tmpwatch
> utility not to delete files out of /var/tmp/one, and thus I have seen
> most of the remotes get deleted off of my cloud hosts, but opennebula
> 4.8 doesn't report any problems.

You are right: when we decided to set /var/tmp/one as the default remotes directory we weren't aware of the tmpwatch utility. However, in our case, when we remove /var/tmp/one the driver resends the probes automatically:

    Mon Nov 17 11:07:04 2014 [Z0][InM][D]: Monitoring host localhost (0)
    Mon Nov 17 11:07:05 2014 [Z0][InM][I]: Command execution fail: 'if [ -x /var/tmp/one/im/run_probes ]; then /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 localhost; else exit 42; fi'
    Mon Nov 17 11:07:05 2014 [Z0][InM][I]: ExitCode: 42
    Mon Nov 17 11:07:05 2014 [Z0][InM][I]: Remote worker node files not found
    Mon Nov 17 11:07:05 2014 [Z0][InM][I]: Updating remotes
    Mon Nov 17 11:07:10 2014 [Z0][InM][D]: Host localhost (0) successfully monitored.

Can you find something similar to that in your log file? If not, there might be a bug somewhere or a configuration problem. It would be great to narrow it down.

> I try to fix the remotes by doing onehost sync and nothing happens. I
> try to disable/enable the host and nothing happens there either. The
> only thing that seems to revive it is if I actually onehost delete /
> onehost create.

Right, you should do:

    onehost sync --force

Take a look at this section: http://docs.opennebula.org/4.10/administration/hosts_and_clusters/host_guide.html#sync

In a nutshell, you can version the probes and update specific nodes/clusters. If there is no version change, onehost sync will not do anything unless --force is used.

> Is it possible for the remotes to add an integrity check? They should
> not be reporting the host is on if the datastores directory of remotes
> has been totally deleted.

The automatic redeployment of probes is our way to deal with this. Let's see if we can figure out why it's not working for you.

> Also, is it possible to configure a directory other than the default
> /var/tmp/one?

Yes:

    /etc/one/oned.conf:71:SCRIPTS_REMOTE_DIR=/var/tmp/one

> Finally, is it possible to redistribute the remotes short of a onehost
> delete / onehost create?

With the --force flag.

Cheers,
Jaime
--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org
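The probe check the driver runs (visible in the log above) is easy to mimic locally. A small sketch using a temporary directory, so nothing touches a real /var/tmp/one:

```shell
# Mimic the frontend's remotes check: exit code 42 means the probes are
# missing, which is what triggers the automatic "Updating remotes" step.
REMOTES_DIR=$(mktemp -d)   # stand-in for /var/tmp/one

check_remotes() {
    if [ -x "$1/im/run_probes" ]; then echo "probes present"; else return 42; fi
}

check_remotes "$REMOTES_DIR" || STATUS=$?   # empty dir, as after tmpwatch
echo "first check exit code: $STATUS"

# Emulate the redeploy (on a real setup: onehost sync --force)
mkdir -p "$REMOTES_DIR/im"
install -m 0755 /dev/null "$REMOTES_DIR/im/run_probes"
check_remotes "$REMOTES_DIR"
```

If your log never shows the `else exit 42` branch firing after /var/tmp/one disappears, that is the anomaly worth reporting.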
Re: [one-users] How to fix hosts with deleted /var/tmp/one files
By the way, I'm considering the option of marking the 'tmpwatch' package as a conflict with opennebula-node-kvm. Do you think this is a good idea?

On Mon, Nov 17, 2014 at 11:14 AM, Jaime Melis jme...@opennebula.org wrote:
> [...]

--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org
[one-users] Changing a onehost name within opennebula
Hello,

When I installed the opennebula software I started with a single host, so it runs everything (sunstone, ...):

    onehost list
      ID NAME       CLUSTER  RVM  ALLOCATED_CPU      ALLOCATED_MEM      STAT
       0 localhost  -         11  2100 / 3200 (65%)  28G / 62.9G (44%)  on

I would like to change 'localhost' to its real uname name. Will it be seamless? Or are there implications I have to address before?

Thanks, regards,
Sergi
Re: [one-users] Changing a onehost name within opennebula
Sergi s...@okitup.com writes:

> Hello,

Hello,

[...]

> I would like to change 'localhost' to its real uname name. Will it be
> seamless? Or are there implications I have to address before?

I think this has an impact on SSH authentication. If you change localhost to $(hostname -s) then you must accept the host key for the new name.

Regards.
--
Daniel Dehennin
Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
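Daniel's point means the oneadmin account on the frontend will be left with a stale known_hosts entry for the old name. A sketch of the cleanup on a scratch file (hostnames and keys below are fake placeholders; on the real file, `ssh-keygen -R localhost` performs the same removal):

```shell
# Simulate pruning ~/.ssh/known_hosts after the host is renamed.
# Both entries are invented placeholders.
KH=$(mktemp)
printf '%s\n' 'localhost ssh-ed25519 AAAAfake1' 'node2 ssh-ed25519 AAAAfake2' > "$KH"

# Drop the stale 'localhost' entry (same effect as: ssh-keygen -R localhost -f "$KH")
grep -v '^localhost ' "$KH" > "$KH.tmp" && mv "$KH.tmp" "$KH"
cat "$KH"
```

After that, one interactive ssh to the new name (or an `ssh-keyscan` of it appended to known_hosts) accepts the key for the new name.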
Re: [one-users] Private cloud with Front-end on Virtual Machine HA
Hi Daniel,

Thanks for the insight. So basically, to sum it up, there is currently no way of running the OpenNebula management node (with all functionality inside one VM) on its own virtualisation cluster (and thus managing itself along with the rest of the cluster). This means you need two physical servers to create a properly redundant (active/passive) setup.

That's a shame, since we'd like to keep hardware to a minimum for this. But if the best practice is to have two physical servers, then I guess we'll have to live with that and just make it happen, although I'm still hoping for a better solution. So I'd be very interested in hearing how others created their redundant OpenNebula management node: is 2x server hardware the best practice, or are there other solutions (with less hardware) to achieve this?

2014-11-13 12:33 GMT+01:00 Daniel Dehennin daniel.dehen...@baby-gnu.org:
> Giancarlo De Filippis gdefilip...@ltbl.it writes:
>> I hope (like you) that someone (users or the OpenNebula team) has best
>> practices on how to run OpenNebula in a VM.
>
> Hello,
>
> The ONE frontend VM can not manage itself; you must use something else.
>
> I made a test with pacemaker/corosync and it can be quite easy[1]:
>
> #+begin_src conf
> primitive Stonith-ONE-Frontend stonith:external/libvirt \
>     params hostlist=one-frontend hypervisor_uri=qemu:///system \
>     pcmk_host_list=one-frontend pcmk_host_check=static-list \
>     op monitor interval=30m
>
> primitive ONE-Frontend-VM ocf:heartbeat:VirtualDomain \
>     params config=/var/lib/one/datastores/one/one.xml \
>     op start interval=0 timeout=90 \
>     op stop interval=0 timeout=100 \
>     utilization cpu=1 hv_memory=1024
>
> group ONE-Frontend Stonith-ONE-Frontend ONE-Frontend-VM
>
> location ONE-Frontend-run-on-hypervisor ONE-Frontend \
>     rule $id=ONE-Frontend-run-on-hypervisor-rule 40: #uname eq nebula1 \
>     rule $id=ONE-Frontend-run-on-hypervisor-rule-0 30: #uname eq nebula3 \
>     rule $id=ONE-Frontend-run-on-hypervisor-rule-1 20: #uname eq nebula2
> #+end_src
>
> I have troubles with my cluster because my nodes _and_ the ONE frontend
> need to access the same SAN. My nodes have two LUNs over multipath FC
> (/dev/mapper/SAN-FS{1,2}); they are both PVs of a clustered volume group
> (cLVM) with a GFS2 on top. So I need:
>
> - corosync for messaging
> - dlm for cLVM and GFS2
> - cLVM
> - GFS2
>
> I added the LUNs as raw block disks to my frontend VM and installed the
> whole stack in it, but I'm facing some communication issues, and have
> managed to solve some of them[3].
>
> According to the pacemaker mailing list, having the nodes _and_ a VM in
> the same pacemaker/corosync cluster "sounds like a recipe for
> disaster"[2].
>
> Hope this helps you get a picture of the topic.
>
> Regards.
>
> Footnotes:
> [1] http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html-single/Clusters_from_Scratch/index.html
> [2] http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/023000.html
> [3] http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/022964.html
> --
> Daniel Dehennin
> Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
> Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF

--
Bart G.
Re: [one-users] network_pool out of sync with vm_pool ONE4.8
Hi,

On Thu, Oct 23, 2014 at 3:25 AM, Steven C Timm t...@fnal.gov wrote:
> Due to an ongoing issue with having to purge the vm_pool of my database
> from time to time, I inadvertently did the purge while 8 VMs were still
> active and thus leases were still allocated. I ran onedb fsck to clean
> it up. It successfully cleaned up the host_pool table to show no running
> VMs. It also said that it updated the network_pool table to show no
> allocated leases, but it did not do so. The network_pool table in ONE4
> holds all 1000 leases of this particular vnet in a single row of the
> table, so correctly executing a manual mysql query to clean it up would
> be almost impossible. Suggestions on what to do to clean it up?
>
> Steve Timm

Unfortunately there is no easy workaround. If you want to dig into the AR/ALLOCATED attribute, you will find a string with pairs of numbers: index and binary value. To identify the pairs you must delete, get the binary numbers that match the mask 0x0010. Then, the operation 0x will give you the VM ID.

Best regards.
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula
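The exact mask and operation in Carlos' description did not survive the archive intact, so the following is only a hypothetical sketch of the idea: treat ALLOCATED as "index value" pairs, and ASSUME (this is not from the original message) that a high bit flags a used lease while the low 32 bits carry the VM ID. The sample data is invented:

```shell
# Hypothetical decoder for an AR/ALLOCATED string of "index value" pairs.
# Assumption (not from the thread): bit 32 flags a used lease, and the
# low 32 bits hold the VM ID. Sample values are 2^32 and 2^32 + 7.
ALLOCATED="0 4294967296 5 4294967303"
VMS=""

set -- $ALLOCATED
while [ "$#" -ge 2 ]; do
    idx=$1; val=$2; shift 2
    if [ $(( (val >> 32) & 1 )) -eq 1 ]; then        # lease marked as used?
        echo "lease $idx -> VM $(( val & 0xFFFFFFFF ))"
        VMS="$VMS$(( val & 0xFFFFFFFF )) "
    fi
done
echo "VM IDs holding leases: $VMS"
```

With the real mask substituted in, the same loop would tell you which pairs to delete from the AR/ALLOCATED string.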
Re: [one-users] Private cloud with Front-end on Virtual Machine HA
Bart b...@pleh.info writes:

> Hi Daniel,

Hello,

> So basically, to sum it up, there is currently no way of running the
> OpenNebula management node (with all functionality inside one VM) on its
> own virtualisation cluster (and thus managing itself along with the rest
> of the cluster).

You can run the VM on the same hardware, but it will not be managed by OpenNebula, since you need OpenNebula to start OpenNebula VMs. That's why the documentation[1] explains the setup of an HA system: if you have two physical servers, you may create two OpenNebula VMs, one master and one slave, with a replicated database between them.

Regards.

Footnotes:
[1] http://docs.opennebula.org/4.10/advanced_administration/high_availability/oneha.html
--
Daniel Dehennin
Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
Re: [one-users] Picking certain fields /USER/NAME out of $USER[TEMPLATE]
Thanks Carlos, looks like that will work.

Steve Timm

On Mon, 17 Nov 2014, Carlos Martín Sánchez wrote:
> Hi,
>
> You can use $UNAME for the username, or $USER[ATTR] for an individual
> attribute of the user template [1].
>
> Best regards
>
> [1] http://docs.opennebula.org/4.10/user/virtual_machine_setup/cong.html
> [...]

--
Steven C. Timm, Ph.D (630) 840-8525
t...@fnal.gov http://home.fnal.gov/~timm/
Office: Wilson Hall room 804
Fermilab Scientific Computing Division
Re: [one-users] network_pool out of sync with vm_pool ONE4.8
On Mon, 17 Nov 2014, Carlos Martín Sánchez wrote:
> [...]
> Unfortunately there is no easy workaround. If you want to dig into the
> AR/ALLOCATED attribute, you will find a string with pairs of numbers:
> index and binary value.
> [...]

It was a bit easier in my case because the cloud was empty at the moment, so I knew I had to delete all allocated pairs. I dumped the table using mysqldump and then went in with a text editor to remove all the allocated pairs. Then I ran onedb fsck again to make sure the database was clean before restarting opennebula. All is OK now.

Steve
--
Steven C. Timm, Ph.D (630) 840-8525
t...@fnal.gov http://home.fnal.gov/~timm/
Office: Wilson Hall room 804
Fermilab Scientific Computing Division
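Steve's dump-and-edit workflow can be sketched against a stand-in dump file (the real commands would be mysqldump/mysql on the opennebula database, the row content below is invented, and on a live system you would stop oned and take a backup first):

```shell
# Sketch of the cleanup: dump network_pool, blank every ALLOCATED element,
# reload the dump, then run `onedb fsck`. A scratch file stands in for the
# real mysqldump output; the row body is made up.
DUMP=$(mktemp)
cat > "$DUMP" <<'EOF'
<NETWORK><AR><ALLOCATED> 0 4294967296 5 4294967303</ALLOCATED></AR></NETWORK>
EOF

# The hand-edit done in the text editor, expressed as a one-liner:
sed -i 's|<ALLOCATED>[^<]*</ALLOCATED>|<ALLOCATED/>|g' "$DUMP"
cat "$DUMP"
```

Running `onedb fsck` after reloading, as Steve did, is the safety net that confirms the hand-edited XML is still consistent.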
Re: [one-users] Changing a onehost name within opennebula
Also, there are 11 VMs running on localhost; that name is in the history records of the VMs, and it will be used to issue VM actions. So it is important to double check the SSH auth as suggested by Daniel.

Cheers

On Mon Nov 17 2014 at 11:45:31 AM Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
> [...]
> I think this has an impact on SSH authentication. If you change
> localhost to $(hostname -s) then you must accept the key for the new
> name.
> [...]