Re: [one-users] Sunstone on ONE 4.2 shows no virtual machines
Daniel, Nevermind, it was the browser's cache. Thanks --- Andreas Calvo Gómez Systems Engineer Skype/ andreas.calvo.gomez M/ +34 638 49 90 63 P/ +34 934 230 324 F/ +34 933 251 028 Scytl Secure Electronic Voting Plaça Gal·la Placidia, 1-3, 1st floor · 08006 Barcelona http://www.scytl.com NOTICE: The information in this e-mail and in any of its attachments is confidential and intended solely for the attention and use of the named addressee(s). If you are not the intended recipient, any disclosure, copying, distribution or retaining of this message or any part of it, without the prior written consent of Scytl Secure Electronic Voting, SA is prohibited and may be unlawful. If you have received this in error, please contact the sender and delete the material from any computer.

From: Users [mailto:users-boun...@lists.opennebula.org] On Behalf Of Andreas Calvo Sent: Thursday, 12 February 2015 16:22 To: Daniel Molina Cc: users@lists.opennebula.org Subject: Re: [one-users] Sunstone on ONE 4.2 shows no virtual machines

Daniel, It's still failing. The main problem has been the GHOST vulnerability found in glibc, which triggered a controlled update of the base system layer. That pulled in newer releases of the libraries on which OpenNebula Sunstone relies. Any hint on how I can debug this in order to find which library or component is failing? Thanks a ton --- Andreas Calvo Gómez

From: Daniel Molina [mailto:dmol...@opennebula.org] Sent: Thursday, 12 February 2015 12:05 To: Andreas Calvo Cc: users@lists.opennebula.org Subject: Re: [one-users] Sunstone on ONE 4.2 shows no virtual machines

Hi, That problem has been fixed in newer versions; upgrading is recommended. You can fix it by changing the following line: https://github.com/OpenNebula/one/blob/release-4.2/src/sunstone/public/js/plugins/vms-tab.js#L3217 to

if (graphics && graphics.TYPE && graphics.TYPE.toLowerCase() == "vnc" && $.inArray(state, VNCstates)!=-1){

Cheers

On 11 February 2015 at 22:06, Andreas Calvo <andreas.ca...@scytl.com> wrote: Daniel, Thanks for your response. Attached you will find the errors shown when accessing the dashboard and the virtual machines tab in Sunstone. Regards, Andreas Calvo Gómez

From: Daniel Molina <dmol...@opennebula.org> Sent: 11 February 2015 18:47 To: Andreas Calvo Cc: users@lists.opennebula.org Subject: Re: [one-users] Sunstone on ONE 4.2 shows no virtual machines

Hi, Could you check in the browser developer console if there is any error? Cheers

On 10 February 2015 at 18:00, Andreas Calvo <andreas.ca...@scytl.com> wrote: Hello, Since this morning, our OpenNebula 4.2 Sunstone portal shows no virtual machines. The rest of the information (hosts, networks, templates, images, users, ACLs) is displayed correctly. Looking at the logs doesn't give much of a hint: all requests are 2xx and no errors are logged in either sunstone.error or oned.error. Issuing "onevm list" as the oneadmin user shows the current list of running VMs. Any hint or file to look at? Thanks -- Andreas Calvo Gómez Systems Engineer
Re: [one-users] Sunstone on ONE 4.2 shows no virtual machines
Daniel, It's still failing. The main problem has been the GHOST vulnerability found in glibc, which triggered a controlled update of the base system layer. That pulled in newer releases of the libraries on which OpenNebula Sunstone relies. Any hint on how I can debug this in order to find which library or component is failing? Thanks a ton --- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting

From: Daniel Molina [mailto:dmol...@opennebula.org] Sent: Thursday, 12 February 2015 12:05 To: Andreas Calvo Cc: users@lists.opennebula.org Subject: Re: [one-users] Sunstone on ONE 4.2 shows no virtual machines

Hi, That problem has been fixed in newer versions; upgrading is recommended. You can fix it by changing the following line: https://github.com/OpenNebula/one/blob/release-4.2/src/sunstone/public/js/plugins/vms-tab.js#L3217 to

if (graphics && graphics.TYPE && graphics.TYPE.toLowerCase() == "vnc" && $.inArray(state, VNCstates)!=-1){

Cheers

On 11 February 2015 at 22:06, Andreas Calvo <andreas.ca...@scytl.com> wrote: Daniel, Thanks for your response. Attached you will find the errors shown when accessing the dashboard and the virtual machines tab in Sunstone. Regards, Andreas Calvo Gómez

From: Daniel Molina <dmol...@opennebula.org> Sent: 11 February 2015 18:47 To: Andreas Calvo Cc: users@lists.opennebula.org Subject: Re: [one-users] Sunstone on ONE 4.2 shows no virtual machines

Hi, Could you check in the browser developer console if there is any error? Cheers

On 10 February 2015 at 18:00, Andreas Calvo <andreas.ca...@scytl.com> wrote: Hello, Since this morning, our OpenNebula 4.2 Sunstone portal shows no virtual machines. The rest of the information (hosts, networks, templates, images, users, ACLs) is displayed correctly. Looking at the logs doesn't give much of a hint: all requests are 2xx and no errors are logged in either sunstone.error or oned.error. Issuing "onevm list" as the oneadmin user shows the current list of running VMs. Any hint or file to look at? Thanks -- Andreas Calvo Gómez Systems Engineer

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

-- Daniel Molina Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
Re: [one-users] DataTables warning message Sunstone
I'm experiencing something similar. Which version and base operating system are you using?

Sent from OWA on Android

From: Users on behalf of Stefan Kooman Sent: 11 February 2015 22:32:33 To: users@lists.opennebula.org Subject: [one-users] DataTables warning message Sunstone

Hi List, If I log into Sunstone (as oneadmin) I receive the following message: DataTables warning: table id=datatable_marketplace - Requested unknown parameter 'files.0.os-arch' for row 27. For more information about this error, please see http://datatables.net/tn/4 Is this related to the (public) marketplace, or is something else messed up? Has anyone ever seen this? Gr. Stefan -- | BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351 | GPG: 0xD14839C6 +31 318 648 688 / i...@bit.nl
[one-users] Sunstone on ONE 4.2 shows no virtual machines
Hello, Since this morning, our OpenNebula 4.2 Sunstone portal shows no virtual machines. The rest of the information (hosts, networks, templates, images, users, ACLs) is displayed correctly. Looking at the logs doesn't give much of a hint: all requests are 2xx and no errors are logged in either sunstone.error or oned.error. Issuing "onevm list" as the oneadmin user shows the current list of running VMs. Any hint or file to look at? Thanks -- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting
Re: [one-users] Sharing images between clusters
Carlos, Thanks for your answer. From what I understand, the Image Datastore must be accessible from the selected hosts (all of them, or just a cluster), while the system datastore must be shared within the cluster (it stores transient data). In my scenario, the clusters do not share the same Image Datastore, nor can they access the same resources. The system datastore was planned to be local (on the DMZ node's filesystem), with the image transferred over the network. So, when moving an image from one cluster to the other (or starting a VM in the DMZ cluster), can OpenNebula's manager copy an image stored on iSCSI (image datastore) over the network to the DMZ node's filesystem (system datastore), given that the DMZ node cannot access iSCSI? Thanks --- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting

----- Original Message ----- From: "Carlos Martín Sánchez" To: "Andreas Calvo Gómez" Cc: users@lists.opennebula.org Sent: Monday, June 9, 2014 6:52:23 PM Subject: Re: [one-users] Sharing images between clusters

Hi, On Thu, Jun 5, 2014 at 11:31 AM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote:
> Hi, I'm trying to set up a DMZ zone and I'm struggling with datastores and TMs. The main goal is to provide an internal zone to work with VMs, and then publish the final image to the DMZ zone. Then, while starting the VM, select the cluster to deploy the image to (maybe using the SSH TM). Would it be possible to share images between clusters? Thanks! -- Andreas Calvo Gómez Systems Engineer

The Image Datastore cannot be in more than one cluster. You can, however, move it to the none (-1) cluster, which makes it accessible to all hosts. System Datastores can be selected for each deployment, but depending on the Image Datastore, the image may be used directly from the Image DS, not the system one. There's more info about this in the storage section of the docs [1], but feel free to come back with any questions. Regards

[1] http://docs.opennebula.org/4.6/administration/storage/sm.html

-- Carlos Martín, MSc Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
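For reference, a system datastore that copies images over the network with the SSH transfer driver could be registered with a template along these lines (a sketch; the datastore name is made up, and whether the image is really copied depends on the image datastore's TM driver, as noted above):

```
NAME   = dmz_system
TYPE   = SYSTEM_DS
TM_MAD = ssh
```

It would be registered with "onedatastore create" and attached to the DMZ cluster with "onecluster adddatastore".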
[one-users] Sharing images between clusters
Hi, I'm trying to set up a DMZ zone and I'm struggling with datastores and TMs. The main goal is to provide an internal zone to work with VMs, and then publish the final image to the DMZ zone. Then, while starting the VM, select the cluster to deploy the image to (maybe using the SSH TM). Would it be possible to share images between clusters? Thanks! -- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting
Re: [one-users] Network addressing and IP recognition in ONE
Jaime, Thanks, I'll try to have a script that reads from one end and feeds the other. However, do you think it would be useful to have some kind of daemon inside the VM that updates the IP address shown in ONE? For instance, a VM is configured with a context that has the IP and port of a REST service on ONE. Once the ONE daemon inside the VM has started and obtained a valid IP address, it pushes that information to ONE, so the IP address shown in the web UI is valid. I assume there would be some other dependencies (the VM ID, for instance), but it could be a nice feature for deploying VMs in a mixed environment, or for having a centralized network server. It may not work with all network drivers, as it still requires a virtual network assigned with VLAN tagging; not having a virtual network, or some tagging at layer 2, would mean defining the MAC-IP pairs manually. Thanks! --- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting

From: Jaime Melis [mailto:jme...@opennebula.org] Sent: Wednesday, 16 October 2013 16:55 To: Andreas Calvo Gómez Cc: users Subject: Re: [one-users] Network addressing and IP recognition in ONE

Hi Andreas, no, that's not currently supported by OpenNebula. You can however use user-defined fields to store any information. If you do "onevm update " you can add any key=value. If you have a third-party tool that reads that user-defined field from the template, you can simulate this feature. cheers

On Thu, Oct 3, 2013 at 1:02 PM, Andreas Calvo Gómez wrote: Hello all, Network addressing under ONE (with any of the drivers) seems targeted at isolated/dedicated networks, where one can assume all IP addresses can be guessed from the MAC. However, when mixing a working network segment with ONE, you have to sacrifice something to get it working. Currently, there are 3 options:
- Virtual Router
- Virtual Network with fixed range
- Virtual Network without range
None of the mentioned options will work with an external network services server (DHCP, DNS) while letting ONE know the right address of a VM. Would it be possible to, given a VM and all its context parameters, push the IP information into ONE? In a more practical approach: during OS startup, get a valid IP address, and during the context script, push that information to ONE. It seems a little more dynamic, since nothing is hardcoded either in Virtual Networks or in the network service (DHCP), plus it gives the ability to control it from outside and integrate with other services. Thanks!!

-- Jaime Melis Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | jme...@opennebula.org
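The push-from-inside-the-VM idea discussed above, combined with Jaime's user-defined-field suggestion, could be sketched roughly as follows. This is only a sketch, not an existing OpenNebula feature: GUEST_IP is a made-up attribute name, and it assumes the one.vm.update XML-RPC method with merge semantics.

```python
import xmlrpc.client

def build_ip_template(ip):
    """User-template fragment to merge into the VM (GUEST_IP is a made-up name)."""
    return 'GUEST_IP = "%s"' % ip

def push_ip(endpoint, session, vm_id, ip):
    """Push the VM's current IP to the frontend.

    Assumes the one.vm.update XML-RPC method; the last argument (1) asks
    for the fragment to be merged into the existing user template.
    """
    server = xmlrpc.client.ServerProxy(endpoint)
    return server.one.vm.update(session, vm_id, build_ip_template(ip), 1)

# Inside the VM this would run from the context script once an address
# is obtained, e.g.:
# push_ip("http://frontend:2633/RPC2", "oneadmin:password", 42, my_ip)
print(build_ip_template("192.168.0.3"))
```

A third-party tool (or Sunstone itself) reading the user template would then show the real address, as suggested in the reply above.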
Re: [one-users] Network addressing and IP recognition in ONE
Dmitri, It helps, but it still keeps the same behaviour: you have to define the relation between MACs and IPs beforehand, either in ONE or in your DHCP server. --- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting

From: Dmitri Chebotarov [mailto:dcheb...@gmu.edu] Sent: Wednesday, 16 October 2013 21:51 To: Jaime Melis; Andreas Calvo Gómez Cc: users Subject: RE: [one-users] Network addressing and IP recognition in ONE

Hi Andreas, I don't know if this helps - when you create a VM you can request a specific IP address for the VM from the VNET, i.e.: NIC = [ NETWORK_ID = 1, IP = 192.168.0.3 ] Thanks,

From: users-boun...@lists.opennebula.org on behalf of Jaime Melis Sent: Wednesday, October 16, 2013 10:54 AM To: Andreas Calvo Gómez Cc: users Subject: Re: [one-users] Network addressing and IP recognition in ONE

Hi Andreas, no, that's not currently supported by OpenNebula. You can however use user-defined fields to store any information. If you do "onevm update " you can add any key=value. If you have a third-party tool that reads that user-defined field from the template, you can simulate this feature. cheers

On Thu, Oct 3, 2013 at 1:02 PM, Andreas Calvo Gómez wrote: Hello all, Network addressing under ONE (with any of the drivers) seems targeted at isolated/dedicated networks, where one can assume all IP addresses can be guessed from the MAC. However, when mixing a working network segment with ONE, you have to sacrifice something to get it working. Currently, there are 3 options:
- Virtual Router
- Virtual Network with fixed range
- Virtual Network without range
None of the mentioned options will work with an external network services server (DHCP, DNS) while letting ONE know the right address of a VM. Would it be possible to, given a VM and all its context parameters, push the IP information into ONE? In a more practical approach: during OS startup, get a valid IP address, and during the context script, push that information to ONE. It seems a little more dynamic, since nothing is hardcoded either in Virtual Networks or in the network service (DHCP), plus it gives the ability to control it from outside and integrate with other services. Thanks!!

-- Jaime Melis Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | jme...@opennebula.org
[one-users] Network addressing and IP recognition in ONE
Hello all, Network addressing under ONE (with any of the drivers) seems targeted at isolated/dedicated networks, where one can assume all IP addresses can be guessed from the MAC. However, when mixing a working network segment with ONE, you have to sacrifice something to get it working. Currently, there are 3 options:
- Virtual Router
- Virtual Network with fixed range
- Virtual Network without range
None of the mentioned options will work with an external network services server (DHCP, DNS) while letting ONE know the right address of a VM. Would it be possible to, given a VM and all its context parameters, push the IP information into ONE? In a more practical approach: during OS startup, get a valid IP address, and during the context script, push that information to ONE. It seems a little more dynamic, since nothing is hardcoded either in Virtual Networks or in the network service (DHCP), plus it gives the ability to control it from outside and integrate with other services. Thanks!!
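On "IP addresses can be guessed from the MAC": in ONE's ranged/fixed virtual networks the default MAC is built from a 2-byte prefix (02:00 by default) followed by the four octets of the lease's IPv4 address, so the mapping can be inverted. A minimal sketch:

```python
def mac_to_ip(mac):
    """Recover the lease IPv4 from an OpenNebula-generated MAC.

    Assumes the default scheme: a 2-byte MAC_PREFIX followed by the
    four octets of the IP address encoded in hex.
    """
    octets = mac.split(":")[2:]  # drop the 2-byte prefix
    return ".".join(str(int(o, 16)) for o in octets)

print(mac_to_ip("02:00:c0:a8:00:03"))  # 192.168.0.3
```

This guessing only works when ONE itself assigned the lease, which is exactly why an external DHCP server breaks the mapping, as described above.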
Re: [one-users] LDAP/AD authentication problems
Javier, I've successfully tried using the LDAP client utils.

With an invalid password:
oneadmin@opennebula:~$ ldapsearch -h ad.mydomain.com -D cn=acalvo,cn=Users,dc=mydomain,dc=com -W
Enter LDAP Password:
ldap_bind: Invalid credentials (49)
additional info: Simple Bind Failed: NT_STATUS_LOGON_FAILURE

With a valid password:
oneadmin@opennebula:~$ ldapsearch -h ad.mydomain.com -D cn=acalvo,cn=Users,dc=mydomain,dc=com -W
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

We are not using SSL at this moment (I think the documentation advised against it).

LDAP configuration:
:user: 'cn=readonly,cn=users,dc=mydomain,dc=com'
:password: 'mybindpassword'
:auth_method: :simple
:host: ad.mydomain.com
:port: 389
:base: 'cn=Users,dc=mydomain,dc=com'

AD configuration:
:user: 'reado...@mydomain.com'
:password: 'mybindpassword'
:auth_method: :simple
:host: ad.mydomain.com
:port: 389
#:encryption: :simple_tls
:base: 'cn=Users,dc=mydomain,dc=com'

In both cases, the output is the same:
oneadmin@opennebula:~$ ./remotes/auth/default/authenticate acalvo badpassword badpassword
Trying server server 1
ldap acalvo CN=acalvo,CN=Users,DC=mydomain,DC=com

Cheers

On 01/10/13 11:56, Javier Fontan wrote: Can you check with the ldapsearch command? Can you authenticate with that command and an invalid password? Are you using SSL? For our tests we use slapd as the LDAP server and a Windows 2008 Server as the Active Directory server. On Tue, Oct 1, 2013 at 9:52 AM, Andreas Calvo Gómez wrote: Javier, We are not using a true AD; instead, we are using Samba 4 as an AD. However, it fails either way, configured as AD or as plain LDAP. I can provide the configuration if necessary, just let me know. Regards, On 24/09/13 10:56, Javier Fontan wrote: I've tested the driver from 4.2 with a Windows 2008 Server Active Directory and it does fail when the password is not correct. Could it be an Active Directory configuration issue? On Fri, Sep 6, 2013 at 4:57 PM, Andreas Calvo Gómez wrote: Javier, Thanks for your time. We are running the latest version of OpenNebula as of today: 4.2.0. On 06/09/13 15:23, Javier Fontan wrote: It looks really bad. Could you please give us the OpenNebula version you are using? I'll do my tests here and will let you know. I've created a ticket to keep track of this problem: http://dev.opennebula.org/issues/2307 On Wed, Aug 28, 2013 at 6:46 PM, Andreas Calvo Gómez wrote: Hi all, I've encountered a strange behavior while trying to configure ONE to authenticate against an AD, either as a proper AD or as plain LDAP. If a credential is used to query LDAP and retrieve the complete DN for the user that wants to log in, then no matter what password the user typed, they will be listed as authenticated. ldap_auth.conf example:

server 1:
    :user: 'myu...@mydomain.com'
    :password: 'mypassword'
    :auth_method: :simple
    :host: ad.mydomain.com
    :port: 389
    :base: 'dc=mydomain,dc=com'
    :user_field: 'sAMAccountName'
:order:
    - server 1

If I manually query the authenticate process with a made-up password and secret, it is always listed as authenticated. For instance:

oneadmin@opennebula:~$ ./remotes/auth/default/authenticate myuser badpassword badpassword
Trying server server 1
ldap myuser CN=myuser,CN=Users,DC=mydomain,DC=com

My guess is that the same user that is used to look up users performs the authenticate method and always returns a valid user. Or maybe I'm missing something. Any hint? Thanks! -- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting
Re: [one-users] LDAP/AD authentication problems
Javier, We are not using a true AD; instead, we are using Samba 4 as an AD. However, it fails either way, configured as AD or as plain LDAP. I can provide the configuration if necessary, just let me know. Regards,

On 24/09/13 10:56, Javier Fontan wrote: I've tested the driver from 4.2 with a Windows 2008 Server Active Directory and it does fail when the password is not correct. Could it be an Active Directory configuration issue? On Fri, Sep 6, 2013 at 4:57 PM, Andreas Calvo Gómez wrote: Javier, Thanks for your time. We are running the latest version of OpenNebula as of today: 4.2.0. On 06/09/13 15:23, Javier Fontan wrote: It looks really bad. Could you please give us the OpenNebula version you are using? I'll do my tests here and will let you know. I've created a ticket to keep track of this problem: http://dev.opennebula.org/issues/2307 On Wed, Aug 28, 2013 at 6:46 PM, Andreas Calvo Gómez wrote: Hi all, I've encountered a strange behavior while trying to configure ONE to authenticate against an AD, either as a proper AD or as plain LDAP. If a credential is used to query LDAP and retrieve the complete DN for the user that wants to log in, then no matter what password the user typed, they will be listed as authenticated. If I manually query the authenticate process with a made-up password and secret, it is always listed as authenticated. For instance:

oneadmin@opennebula:~$ ./remotes/auth/default/authenticate myuser badpassword badpassword
Trying server server 1
ldap myuser CN=myuser,CN=Users,DC=mydomain,DC=com

My guess is that the same user that is used to look up users performs the authenticate method and always returns a valid user. Or maybe I'm missing something. Any hint? Thanks! -- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting
Re: [one-users] LDAP/AD authentication problems
Javier, Thanks for your time. We are running the latest version of OpenNebula as of today: 4.2.0.

On 06/09/13 15:23, Javier Fontan wrote: It looks really bad. Could you please give us the OpenNebula version you are using? I'll do my tests here and will let you know. I've created a ticket to keep track of this problem: http://dev.opennebula.org/issues/2307 On Wed, Aug 28, 2013 at 6:46 PM, Andreas Calvo Gómez wrote: Hi all, I've encountered a strange behavior while trying to configure ONE to authenticate against an AD, either as a proper AD or as plain LDAP. If a credential is used to query LDAP and retrieve the complete DN for the user that wants to log in, then no matter what password the user typed, they will be listed as authenticated. If I manually query the authenticate process with a made-up password and secret, it is always listed as authenticated. For instance:

oneadmin@opennebula:~$ ./remotes/auth/default/authenticate myuser badpassword badpassword
Trying server server 1
ldap myuser CN=myuser,CN=Users,DC=mydomain,DC=com

My guess is that the same user that is used to look up users performs the authenticate method and always returns a valid user. Or maybe I'm missing something. Any hint? Thanks! -- Andreas Calvo Gómez Systems Engineer Scytl Secure Electronic Voting
[one-users] LDAP/AD authentication problems
Hi all, I've encountered a strange behavior while trying to configure ONE to authenticate against an AD, either as a proper AD or as an LDAP. If a credential is used to query LDAP and retrieve the complete DN for the user that wants to log in, then no matter what password the user has typed, the user will be listed as authenticated. ldap_auth.conf example:

server 1:
    :user: 'myu...@mydomain.com'
    :password: 'mypassword'
    :auth_method: :simple
    :host: ad.mydomain.com
    :port: 389
    :base: 'dc=mydomain,dc=com'
    :user_field: 'sAMAccountName'
:order:
    - server 1

If I manually query the authenticate process with a made-up password and secret, it is always listed as authenticated. For instance:

oneadmin@opennebula:~$ ./remotes/auth/default/authenticate myuser badpassword badpassword
Trying server server 1
ldap myuser CN=myuser,CN=Users,DC=mydomain,DC=com

My guess is that the same user that is used to look up users also performs the authenticate step and always returns a valid user. Or maybe I'm missing something. Any hint? Thanks!
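The suspected failure mode above can be sketched in plain Ruby. This is a hypothetical illustration, not the actual ldap_auth driver code: the directory hash stands in for AD, and both function names are mine. The buggy path treats a successful DN lookup (done with the service account's credentials) as authentication, so the user's own password is never checked; the fixed path binds again as the resolved DN with the password the user typed.

```ruby
# Toy "directory" standing in for Active Directory.
DIRECTORY = {
  'myuser' => { dn: 'CN=myuser,CN=Users,DC=mydomain,DC=com',
                password: 'realpassword' }
}

# Buggy flow: succeeds as soon as the DN lookup succeeds --
# the password argument is never used.
def authenticate_buggy(username, _password)
  entry = DIRECTORY[username]
  entry ? entry[:dn] : nil
end

# Correct flow: after resolving the DN, verify the user's own
# credentials (in a real driver, by re-binding as that DN).
def authenticate_fixed(username, password)
  entry = DIRECTORY[username]
  return nil unless entry
  entry[:password] == password ? entry[:dn] : nil
end
```

The buggy variant returns a DN for any password, which matches the `badpassword` transcript in the report.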
Re: [one-users] Monitoring OpenNebula deployments with Ganglia
Javier, Again, you were right. Querying Ganglia's monitor returned a public IP (instead of a service IP) for all nodes. Thanks

On 28/08/13 11:25, Javier Fontan wrote: Make sure that the name in OpenNebula is the same as the node name in Ganglia. To check the name Ganglia sees, execute:

$ telnet <host> 8649 | grep "HOST NAME="

On Tue, Aug 27, 2013 at 12:33 PM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote: Following the documentation [1] to set up Ganglia, I've ended up seeing ONE variables in Ganglia, but they are not being pushed to ONE. What I've done so far under Ubuntu to set up Ganglia and ONE:
- configure VM_MAD and IM_MAD in oned.conf, restart
- install ganglia-monitor, ganglia-metadata and gmetad on ONE's frontend
- install ganglia-monitor on all nodes
- configure /remotes/vmm/kvm/poll_ganglia and /remotes/im/ganglia.d/ganglia_probe to reach ONE's frontend
- set up the cron job to define OPENNEBULA_VMS_INFORMATION
- add nodes with Ganglia as IM
Nodes added with Ganglia as IM have no information, although they are listed as ON and MONITORED. Am I missing some step? Thanks! [1] http://opennebula.org/documentation:rel4.2:ganglia

-- Join us at OpenNebulaConf2013 in Berlin from the 24th to the 26th of September 2013! Javier Fontán Muiños Developer OpenNebula - The Open Source Toolkit for Data Center Virtualization www.OpenNebula.org | @OpenNebula | github.com/jfontan
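The name check above can also be scripted. This is a sketch under the assumption that gmond's XML uses `<HOST NAME="...">` elements, as the telnet-and-grep output suggests; the function name and sample document are mine, not part of the Ganglia probes. It extracts every HOST name so the list can be compared against the host names registered in OpenNebula (`onehost list`).

```ruby
require 'rexml/document'

# Pull the NAME attribute of every <HOST> element out of the XML
# that gmond serves on port 8649.
def ganglia_host_names(xml)
  doc = REXML::Document.new(xml)
  names = []
  doc.elements.each('//HOST') { |h| names << h.attributes['NAME'] }
  names
end
```

A mismatch (e.g. Ganglia reporting a public IP while OpenNebula knows the host by its service name) is exactly the failure described in this thread.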
[one-users] Monitoring OpenNebula deployments with Ganglia
Following the documentation [1] to set up Ganglia, I've ended up seeing ONE variables in Ganglia, but they are not being pushed to ONE. What I've done so far under Ubuntu to set up Ganglia and ONE:
- configure VM_MAD and IM_MAD in oned.conf, restart
- install ganglia-monitor, ganglia-metadata and gmetad on ONE's frontend
- install ganglia-monitor on all nodes
- configure /remotes/vmm/kvm/poll_ganglia and /remotes/im/ganglia.d/ganglia_probe to reach ONE's frontend
- set up the cron job to define OPENNEBULA_VMS_INFORMATION
- add nodes with Ganglia as IM
Nodes added with Ganglia as IM have no information, although they are listed as ON and MONITORED. Am I missing some step? Thanks! [1] http://opennebula.org/documentation:rel4.2:ganglia
Re: [one-users] default schedule action
Thanks Carlos, That should do the trick!

On 27/08/13 11:26, Carlos Martín Sánchez wrote: Hi, For this kind of customization we provide Hooks [1]. I would create a VM hook on CREATE, with the $TEMPLATE argument. Your hook can be a small bash or ruby script, but any executable is fine. This script will need to check the uid/gid of the VM template, and call onevm shutdown --schedule now+24h [2]. Let us know if you have any other specific question about hooks. Regards [1] http://opennebula.org/documentation:rel4.2:hooks [2] http://opennebula.org/documentation:rel4.2:vm_guide_2#scheduling_actions -- Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013 -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula

On Mon, Aug 26, 2013 at 1:46 PM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote: Hello all, Would it be possible to define a default schedule action based on user/group? For instance, imagine you want to delete a VM once it has been running for 24h; would it be possible? Thanks
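A minimal sketch of the hook Carlos describes, under several assumptions: that the hook receives $TEMPLATE as base64-encoded XML containing VM/ID and VM/GID elements, and that the group to limit has GID 100. The script name, the GID, and the decision helper are all hypothetical; only the onevm shutdown --schedule call comes from the thread.

```ruby
#!/usr/bin/env ruby
# Hypothetical hook, registered in oned.conf along the lines of:
#   VM_HOOK = [ name = "ttl24h", on = "CREATE",
#               command = "ttl24h.rb", arguments = "$TEMPLATE" ]
require 'base64'
require 'rexml/document'

# GID of the group whose VMs should be shut down after 24h (assumption).
LIMITED_GID = 100

# Returns the onevm command to run for this VM, or nil if it is exempt.
def scheduled_action(template_b64)
  xml  = REXML::Document.new(Base64.decode64(template_b64))
  vmid = xml.elements['VM/ID'].text
  gid  = xml.elements['VM/GID'].text.to_i
  return nil unless gid == LIMITED_GID
  "onevm shutdown #{vmid} --schedule \"now +24h\""
end

# When invoked by oned with the template as ARGV[0], run the command.
if __FILE__ == $PROGRAM_NAME && ARGV[0]
  cmd = scheduled_action(ARGV[0])
  system(cmd) if cmd
end
```

Splitting the decision into a pure function keeps the GID check testable without a running OpenNebula.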
[one-users] default schedule action
Hello all, Would it be possible to define a default schedule action based on user/group? For instance, imagine you want to delete a VM once it has been running for 24h; would it be possible? Thanks
Re: [one-users] opennebula upgrade to 3.8.3 minor errors
Daniel, On 06/02/13 14:22, Daniel Molina wrote: On 6 February 2013 12:36, Andreas Calvo Gómez wrote: Daniel, On 06/02/13 12:34, Daniel Molina wrote: On 6 February 2013 12:17, Andreas Calvo Gómez wrote: Hi Daniel! On 06/02/13 11:15, Daniel Molina wrote: Hi, On 1 February 2013 13:05, Andreas Calvo Gómez wrote: We have performed an update of both the OS (CentOS 6.2 to 6.3) and OpenNebula (3.8.1 to 3.8.3), and are facing some errors that weren't in the older release: - Logs can't be viewed from the web UI: selecting the "VM Log" tab shows an error message "Log for VM XXX not available". There was a bug in previous releases with the VM log directory, but it should be fixed in 3.8.3. Did you restart the sunstone-server after upgrading? Yes, several times, because Sunstone or ONE becomes unresponsive from time to time. Is Sunstone running on the same machine as OpenNebula? Yes, we have a frontend with Sunstone and OpenNebula (using NFS to share the common storage). The new VM logs should be in:

system-wide: LOG_LOCATION + "/vms/#{id}.log"
self-contained: LOG_LOCATION + "/vms/#{id}/vm.log"

The former locations were:

system-wide: LOG_LOCATION + "/#{id}.log"
self-contained: LOG_LOCATION + "/#{id}/vm.log"

The logs from VMs created in old versions have to be moved/linked to this new directory. I will update the upgrade guide with this step, sorry for the inconvenience. However, logs for new VMs are created under /var/log/one, which I thought is where ONE is looking to show them. - One node can't establish network connections: started VMs on one specific node don't have network connectivity, while they get the IP address by context. All other nodes work fine. Please check the following link: http://wiki.opennebula.org/faq#my_vm_is_running_but_i_get_no_answer_from_pings_what_s_wrong Maybe there is something wrong with the host configuration. I'll try that link, but all nodes performed the same update procedure and only one is not working.
- Sunstone/ONE seems to get stuck from time to time when performing large operations: as in the old version (3.8.1), Sunstone becomes unresponsive and it is necessary to shut down all applications (one, sunstone, occi) and start them again. It usually happens when performing an operation with a large set of VMs. For all errors, no log in /var/log/one gives any hint. Is there any other way to try to narrow down the root source of these problems? What are exactly the actions you performed, and with how many VMs? Could you send us your sched.conf and oned.conf files? When a batch command is executed (usually deleting 50+ VMs), Sunstone or ONE becomes unresponsive and must be restarted. Yes, attached you will find our config files. Can you interact with OpenNebula through the CLI, or is Sunstone the only one that becomes unresponsive? Everything becomes unresponsive, but not at the same time. When a user performs a large batch operation, the same user can't run more commands, but other users are fine for a while. After some time, users can't log into Sunstone and other services (such as OCCI) become unresponsive too. Could you send us the oned.log part when this operation is performed? I've attached a compressed file with all logs (besides VM-related ones) while performing the batch operation. Hope it's useful. Cheers
one_logs.tar.gz Description: GNU Zip compressed data
Re: [one-users] opennebula upgrade to 3.8.3 minor errors
Daniel, On 06/02/13 12:34, Daniel Molina wrote: On 6 February 2013 12:17, Andreas Calvo Gómez wrote: Hi Daniel! On 06/02/13 11:15, Daniel Molina wrote: Hi, On 1 February 2013 13:05, Andreas Calvo Gómez wrote: We have performed an update of both the OS (CentOS 6.2 to 6.3) and OpenNebula (3.8.1 to 3.8.3), and are facing some errors that weren't in the older release: - Logs can't be viewed from the web UI: selecting the "VM Log" tab shows an error message "Log for VM XXX not available". There was a bug in previous releases with the VM log directory, but it should be fixed in 3.8.3. Did you restart the sunstone-server after upgrading? Yes, several times, because Sunstone or ONE becomes unresponsive from time to time. Is Sunstone running on the same machine as OpenNebula? Yes, we have a frontend with Sunstone and OpenNebula (using NFS to share the common storage). - One node can't establish network connections: started VMs on one specific node don't have network connectivity, while they get the IP address by context. All other nodes work fine. Please check the following link: http://wiki.opennebula.org/faq#my_vm_is_running_but_i_get_no_answer_from_pings_what_s_wrong Maybe there is something wrong with the host configuration. I'll try that link, but all nodes performed the same update procedure and only one is not working. - Sunstone/ONE seems to get stuck from time to time when performing large operations: as in the old version (3.8.1), Sunstone becomes unresponsive and it is necessary to shut down all applications (one, sunstone, occi) and start them again. It usually happens when performing an operation with a large set of VMs. For all errors, no log in /var/log/one gives any hint. Is there any other way to try to narrow down the root source of these problems? What are exactly the actions you performed, and with how many VMs? Could you send us your sched.conf and oned.conf files?
When a batch command is executed (usually deleting 50+ VMs), Sunstone or ONE becomes unresponsive and must be restarted. Yes, attached you will find our config files. Can you interact with OpenNebula through the CLI, or is Sunstone the only one that becomes unresponsive? Everything becomes unresponsive, but not at the same time. When a user performs a large batch operation, the same user can't run more commands, but other users are fine for a while. After some time, users can't log into Sunstone and other services (such as OCCI) become unresponsive too.
Re: [one-users] opennebula upgrade to 3.8.3 minor errors
Hi Daniel! On 06/02/13 11:15, Daniel Molina wrote: Hi, On 1 February 2013 13:05, Andreas Calvo Gómez wrote: We have performed an update of both the OS (CentOS 6.2 to 6.3) and OpenNebula (3.8.1 to 3.8.3), and are facing some errors that weren't in the older release: - Logs can't be viewed from the web UI: selecting the "VM Log" tab shows an error message "Log for VM XXX not available". There was a bug in previous releases with the VM log directory, but it should be fixed in 3.8.3. Did you restart the sunstone-server after upgrading? Yes, several times, because Sunstone or ONE becomes unresponsive from time to time. - One node can't establish network connections: started VMs on one specific node don't have network connectivity, while they get the IP address by context. All other nodes work fine. Please check the following link: http://wiki.opennebula.org/faq#my_vm_is_running_but_i_get_no_answer_from_pings_what_s_wrong Maybe there is something wrong with the host configuration. I'll try that link, but all nodes performed the same update procedure and only one is not working. - Sunstone/ONE seems to get stuck from time to time when performing large operations: as in the old version (3.8.1), Sunstone becomes unresponsive and it is necessary to shut down all applications (one, sunstone, occi) and start them again. It usually happens when performing an operation with a large set of VMs. For all errors, no log in /var/log/one gives any hint. Is there any other way to try to narrow down the root source of these problems? What are exactly the actions you performed, and with how many VMs? Could you send us your sched.conf and oned.conf files? When a batch command is executed (usually deleting 50+ VMs), Sunstone or ONE becomes unresponsive and must be restarted. Yes, attached you will find our config files.
Cheers

#***
# OpenNebula Configuration file
#***

#***
# Daemon configuration attributes
#---
# MANAGER_TIMER: Time in seconds the core uses to evaluate periodical functions.
# HOST_MONITORING_INTERVAL and VM_POLLING_INTERVAL can not have smaller values
# than MANAGER_TIMER.
#
# HOST_MONITORING_INTERVAL: Time in seconds between host monitorization.
# HOST_PER_INTERVAL: Number of hosts monitored in each interval.
# HOST_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring
# information. Use 0 to disable HOST monitoring recording.
#
# VM_POLLING_INTERVAL: Time in seconds between virtual machine monitorization.
# Use 0 to disable VM monitoring.
# VM_PER_INTERVAL: Number of VMs monitored in each interval.
# VM_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring
# information. Use 0 to disable VM monitoring recording.
#
# SCRIPTS_REMOTE_DIR: Remote path to store the monitoring and VM management
# scripts.
#
# PORT: Port where oned will listen for xmlrpc calls.
#
# DB: Configuration attributes for the database backend
#   backend : can be sqlite or mysql (default is sqlite)
#   server  : (mysql) host name or an IP address for the MySQL server
#   port    : (mysql) port for the connection to the server.
#             If set to 0, the default port is used.
#   user    : (mysql) user's MySQL login ID
#   passwd  : (mysql) the password for user
#   db_name : (mysql) the database name
#
# VNC_BASE_PORT: VNC ports for VMs can be automatically set to VNC_BASE_PORT +
# VMID
#
# DEBUG_LEVEL: 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
#***

#MANAGER_TIMER = 30

HOST_MONITORING_INTERVAL = 600
#HOST_PER_INTERVAL = 15
#HOST_MONITORING_EXPIRATION_TIME = 86400

VM_POLLING_INTERVAL = 600
#VM_PER_INTERVAL = 5
#VM_MONITORING_EXPIRATION_TIME = 86400

SCRIPTS_REMOTE_DIR=/var/tmp/one

PORT = 2633

DB = [ backend = "sqlite" ]

# Sample configuration for MySQL
# DB = [ backend = "
[one-users] opennebula upgrade to 3.8.3 minor errors
We have performed an update of both the OS (CentOS 6.2 to 6.3) and OpenNebula (3.8.1 to 3.8.3), and are facing some errors that weren't in the older release:
- Logs can't be viewed from the web UI: selecting the "VM Log" tab shows an error message "Log for VM XXX not available".
- One node can't establish network connections: started VMs on one specific node don't have network connectivity, while they get the IP address by context. All other nodes work fine.
- Sunstone/ONE seems to get stuck from time to time when performing large operations: as in the old version (3.8.1), Sunstone becomes unresponsive and it is necessary to shut down all applications (one, sunstone, occi) and start them again. It usually happens when performing an operation with a large set of VMs.
For all errors, no log in /var/log/one gives any hint. Is there any other way to try to narrow down the root source of these problems? Thanks
Re: [one-users] Change one.db path [was ONE sunstone locking randomly]
Hello, Is there any way to place the one.db sqlite file in another directory (so it is outside the NFS shared path)? Thanks

On 14/01/13 15:32, Andreas Calvo Gómez wrote: We will try it under our maintenance schedule. However, stopping and starting daemons shows the following warnings:

[root@opennebula ~]# /etc/init.d/opennebula-occi stop && /etc/init.d/opennebula-sunstone stop && /etc/init.d/opennebula stop && /etc/init.d/httpd stop
Stopping OCCI Server daemon: occi-server stopped [ OK ]
Stopping Sunstone Server daemon: sunstone-server stopped [ OK ]
Stopping OpenNebula daemon: oned and scheduler stopped [ OK ]
Stopping httpd: [ OK ]
[root@opennebula ~]# service opennebula start && service opennebula-sunstone start && service opennebula-occi start && service httpd start
Starting OpenNebula daemon: Stale .lock detected. Erasing it. [ OK ]
Starting Sunstone Server daemon: sunstone-server started [ OK ]
Starting OCCI Server: Stale .lock detected. Erasing it. occi-server started [ OK ]
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using opennebula.scytl.net for ServerName [ OK ]

Any hint about those detected lock files? Thanks

On 11/01/13 14:51, Carlos Martín Sánchez wrote: Hi, I guess you are referring to the old documentation, right? I found the warning about sqlite and NFS in the 3.2 archives [1]. It should be enough if you move the one.db file and create a symlink.
Regards [1] http://opennebula.org/documentation:archives:rel3.2:sfs#considerations_limitations -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula

On Fri, Jan 11, 2013 at 2:30 PM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote: Carlos, We are exporting the whole /var/lib/one directory as per the documentation, which helps to maintain all the cloud by sharing the same ssh public keys and configuration. Do you think placing the SQLite database in a non-NFS directory will make things better?

On 11/01/13 14:09, Carlos Martín Sánchez wrote: Hi, Are you exporting only the shared datastores, or the whole /var/lib/one location? In older versions we advised to share /var/lib/one to make the ssh configuration easier, but NFS would make the sqlite DB misbehave. Regards -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula

On Thu, Jan 10, 2013 at 11:45 AM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote: Hello, We've been using ONE for some time now, and it worked flawlessly. A month ago, we switched from GFS to NFS for shared storage, because GFS is really difficult to maintain and operate. We found out that Sunstone is becoming inactive randomly: while the web UI seems to be working, users cannot log in and logged-in users cannot operate (launch VNC, start/stop VMs, ...). However, there are no references to an error or lockup either in the Sunstone logs (both .log and .error) or in the ONE logs. Any hint or pointer to track down this behaviour? ONE version is 3.8.1 under CentOS 6.3.
Thanks
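For reference, the move-and-symlink step Carlos suggests can be sketched as follows. The helper name and paths are illustrative (the defaults would be something like /var/lib/one for the NFS export and a local directory of your choice), and oned should be stopped before doing this for real:

```ruby
require 'fileutils'

# Relocate one.db off the NFS export and leave a symlink behind,
# so oned keeps finding the database at its original path.
def relocate_db(nfs_dir, local_dir, name = 'one.db')
  src = File.join(nfs_dir, name)    # original (NFS) location
  dst = File.join(local_dir, name)  # new local location
  FileUtils.mkdir_p(local_dir)
  FileUtils.mv(src, dst)
  FileUtils.ln_s(dst, src)          # e.g. /var/lib/one/one.db -> local copy
  src
end
```

This keeps the rest of /var/lib/one shared over NFS (ssh keys, configuration) while the SQLite file, which misbehaves on NFS, lives on local disk.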
Re: [one-users] ONE sunstone locking randomly
We will try it under our maintenance schedule. However, stopping and starting daemons shows the following warnings:

[root@opennebula ~]# /etc/init.d/opennebula-occi stop && /etc/init.d/opennebula-sunstone stop && /etc/init.d/opennebula stop && /etc/init.d/httpd stop
Stopping OCCI Server daemon: occi-server stopped [ OK ]
Stopping Sunstone Server daemon: sunstone-server stopped [ OK ]
Stopping OpenNebula daemon: oned and scheduler stopped [ OK ]
Stopping httpd: [ OK ]
[root@opennebula ~]# service opennebula start && service opennebula-sunstone start && service opennebula-occi start && service httpd start
Starting OpenNebula daemon: Stale .lock detected. Erasing it. [ OK ]
Starting Sunstone Server daemon: sunstone-server started [ OK ]
Starting OCCI Server: Stale .lock detected. Erasing it. occi-server started [ OK ]
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using opennebula.scytl.net for ServerName [ OK ]

Any hint about those detected lock files? Thanks

On 11/01/13 14:51, Carlos Martín Sánchez wrote: Hi, I guess you are referring to the old documentation, right? I found the warning about sqlite and NFS in the 3.2 archives [1]. It should be enough if you move the one.db file and create a symlink. Regards [1] http://opennebula.org/documentation:archives:rel3.2:sfs#considerations_limitations -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula

On Fri, Jan 11, 2013 at 2:30 PM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote: Carlos, We are exporting the whole /var/lib/one directory as per the documentation, which helps to maintain all the cloud by sharing the same ssh public keys and configuration. Do you think placing the SQLite database in a non-NFS directory will make things better?
On 11/01/13 14:09, Carlos Martín Sánchez wrote: Hi, Are you exporting only the shared datastores, or the whole /var/lib/one location? In older versions we advised to share /var/lib/one to make the ssh configuration easier, but NFS would make the sqlite DB misbehave. Regards -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula

On Thu, Jan 10, 2013 at 11:45 AM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote: Hello, We've been using ONE for some time now, and it worked flawlessly. A month ago, we switched from GFS to NFS for shared storage, because GFS is really difficult to maintain and operate. We found out that Sunstone is becoming inactive randomly: while the web UI seems to be working, users cannot log in and logged-in users cannot operate (launch VNC, start/stop VMs, ...). However, there are no references to an error or lockup either in the Sunstone logs (both .log and .error) or in the ONE logs. Any hint or pointer to track down this behaviour? ONE version is 3.8.1 under CentOS 6.3. Thanks
Re: [one-users] ONE sunstone locking randomly
Carlos, We are exporting the whole /var/lib/one directory as per the documentation, which helps to maintain all the cloud by sharing the same ssh public keys and configuration. Do you think placing the SQLite database in a non-NFS directory will make things better?

On 11/01/13 14:09, Carlos Martín Sánchez wrote: Hi, Are you exporting only the shared datastores, or the whole /var/lib/one location? In older versions we advised to share /var/lib/one to make the ssh configuration easier, but NFS would make the sqlite DB misbehave. Regards -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula

On Thu, Jan 10, 2013 at 11:45 AM, Andreas Calvo Gómez <andreas.ca...@scytl.com> wrote: Hello, We've been using ONE for some time now, and it worked flawlessly. A month ago, we switched from GFS to NFS for shared storage, because GFS is really difficult to maintain and operate. We found out that Sunstone is becoming inactive randomly: while the web UI seems to be working, users cannot log in and logged-in users cannot operate (launch VNC, start/stop VMs, ...). However, there are no references to an error or lockup either in the Sunstone logs (both .log and .error) or in the ONE logs. Any hint or pointer to track down this behaviour? ONE version is 3.8.1 under CentOS 6.3. Thanks
[one-users] ONE sunstone locking randomly
Hello, We've been using ONE for some time now, and it worked flawlessly. A month ago, we switched from GFS to NFS for shared storage, because GFS is really difficult to maintain and operate. We found out that Sunstone is becoming inactive randomly: while the web UI seems to be working, users cannot log in and logged-in users cannot operate (launch VNC, start/stop VMs, ...). However, there are no references to an error or lockup either in the Sunstone logs (both .log and .error) or in the ONE logs. Any hint or pointer to track down this behaviour? ONE version is 3.8.1 under CentOS 6.3. Thanks
[one-users] Default privileges on objects
By default, most objects are created with USE and MANAGE rights only for the owner. Is there any way to define another set of default privileges? Thanks -- Andreas Calvo Gómez, Systems Engineer, Scytl Secure Electronic Voting
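As a workaround (a sketch only, not an answer from this thread), permissions can at least be adjusted per resource after creation with the chmod subcommands of the CLI. The resource IDs below are illustrative; each octal digit encodes USE (4), MANAGE (2) and ADMIN (1) bits for owner, group and other:

```shell
# Owner: USE+MANAGE (6), group: USE (4), others: none (0)
onetemplate chmod 10 640
oneimage chmod 7 640
onevm chmod 42 640
```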
Re: [one-users] help: with 8021q driver, network configuration won't work, after host reboot (???)
In such a case you must know all VLANs beforehand, set them up and create the necessary bridges. Be careful to put the same configuration in OpenNebula and on the host, as a mismatch may lead to VMs being tied to a different bridge. On 03/12/12 03:28, 于长江 wrote: Thank you for your answer. I know this configuration, but my physical host was rebooted and I want to restart the virtual machine manually instead of through OpenNebula. I mean those interfaces are not up when the physical host starts. I roughly understand why OpenNebula does this: it wants all VM operations to be performed by it, not by us. 于长江 | 东软 *From:* Andreas Calvo Gómez <mailto:andreas.ca...@scytl.com> *Date:* 2012-12-01 00:11 *To:* 于长江 <mailto:yu...@neusoft.com> *CC:* users <mailto:users@lists.opennebula.org> *Subject:* Re: help: with 8021q driver, network configuration won't work, after host reboot (???) OpenNebula recreates all necessary interfaces on demand. This means that, once a virtual network uses 802.1Q, it will create the bridge to route the traffic and map the VLAN onto the physical interface. Depending on your distribution, you may need to specify the path of the bridge control binary (brctl). On Red Hat derivatives it is located at /usr/sbin/brctl, whereas the default location is /sbin/brctl. You can either tweak the network scripts to look for the correct path or make a symlink. On 30/11/12 13:58, 于长江 wrote: 1.
First, I created a VM on computer14 and saw the network interfaces below:

[oneadmin@computer14 ~]$ ifconfig
eth0      Link encap:Ethernet  HWaddr E8:39:35:18:A1:60
          inet addr:10.72.24.216  Bcast:10.72.24.255  Mask:255.255.255.0
          inet6 addr: fe80::ea39:35ff:fe18:a160/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:21587155 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8407735 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:22927887802 (21.3 GiB)  TX bytes:667899749 (636.9 MiB)
          Memory:fbe6-fbe8

eth0.100  Link encap:Ethernet  HWaddr E8:39:35:18:A1:60
          inet6 addr: fe80::ea39:35ff:fe18:a160/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16672 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:784568 (766.1 KiB)

onebr100  Link encap:Ethernet  HWaddr E8:39:35:18:A1:60
          inet6 addr: fe80::acb0:e4ff:fe95:6f08/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13746 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:398936 (389.5 KiB)  TX bytes:468 (468.0 b)

vnet0     Link encap:Ethernet  HWaddr FE:00:DD:67:22:02
          inet6 addr: fe80::fc00:ddff:fe67:2202/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 b)  TX bytes:732 (732.0 b)

2. Then I rebooted computer14:

[oneadmin@computer14 ~]$ sudo shutdown -r now

3. After this operation I saw the vconfig network interfaces were gone (eth0.100, onebr100):

[oneadmin@computer14 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr E8:39:35:18:A1:60
          inet addr:10.72.24.216  Bcast:10.72.24.255  Mask:255.255.255.0
          inet6 addr: fe80::ea39:35ff:fe18:a160/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:122 errors:0 dropped:0 overruns:0 frame:0
          TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13317 (13.0 KiB)  TX bytes:12829 (12.5 KiB)
          Memory:fbe6-fbe8

4. When I tried to create that VM again, the error below occurred:

[root@computer14 images]# pwd
/one_images/25/images
[root@computer14 images]# ls
deployment.0  disk.0  disk.1
[root@computer14 images]# virsh create deployment.0
error: Failed to create domain from deployment.0
error: Failed to add tap interface to bridge 'onebr100': No such device

-------- 于长江 | 东软 *From:* Andreas Calvo Gómez <mailto:andreas.ca...@scytl.com> *Date:* 2012-11-30 20:33 *To:* yu...@neusoft.com <mailto:yu...@neusoft.com> *CC:* users@lists.opennebula.org <mailto:users@lists.opennebula.org> *Subject:* Re: help: with 8021q driver, network configuration won't work, after host reboot (???) We've been using 802.1Q under CentOS 6 without problems for quite a long time. Can you post a detailed description of what is happening? Thanks On 30/11/12 07:57, users-r
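To restart the VM by hand after a reboot, the VLAN interface and bridge that OpenNebula had created need to be recreated first. A sketch using the names from the output above (eth0, VLAN 100, onebr100); it needs root and the 8021q module loaded:

```shell
# Recreate the VLAN sub-interface and the bridge lost on reboot
vconfig add eth0 100        # or: ip link add link eth0 name eth0.100 type vlan id 100
brctl addbr onebr100
brctl addif onebr100 eth0.100
ip link set eth0.100 up
ip link set onebr100 up

# Now libvirt can attach the tap device to the bridge again
virsh create deployment.0
```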
Re: [one-users] help: with 8021q driver, network configuration won't work, after host reboot (???)
We've been using 802.1Q under CentOS 6 without problems for quite a long time. Can you post a detailed description of what is happening? Thanks On 30/11/12 07:57, users-requ...@lists.opennebula.org wrote: Message: 2 Date: Fri, 30 Nov 2012 14:56:47 +0800 From: ??? To: users Subject: [one-users] help: with 8021q driver, network configuration won't work after host reboot Message-ID: <201211301456470519...@neusoft.com> Content-Type: text/plain; charset="gb2312" As I said, with the 8021q driver the network configuration won't work after a host reboot. Does OpenNebula take this into account? Here is my solution for RHEL below (is this OK?):

# cat $ONE_LOCATION/var/remotes/vnm/802.1Q/HostManaged.rb
require 'OpenNebulaNetwork'
...
def create_bridge(bridge)
  OpenNebula.exec_and_log_no_exit("#{COMMANDS[:brctl]} addbr #{bridge}")
  OpenNebula.exec_and_log_no_exit("sudo sh -c 'echo brctl addbr #{bridge} >> /etc/rc.d/rc.local'")
end
...
def create_dev_vlan(dev, vlan)
  OpenNebula.exec_and_log("#{COMMANDS[:vconfig]} add #{dev} #{vlan}")
  OpenNebula.exec_and_log("sudo sh -c 'echo vconfig add #{dev} #{vlan} >> /etc/rc.d/rc.local'")
end
...
def attach_brigde_dev(bridge, dev, vlan=nil)
  dev = "#{dev}.#{vlan}" if vlan
  OpenNebula.exec_and_log("#{COMMANDS[:brctl]} addif #{bridge} #{dev}")
  OpenNebula.exec_and_log("sudo sh -c 'echo brctl addif #{bridge} #{dev} >> /etc/rc.d/rc.local'")
end
...
def ifup(dev, vlan=nil)
  dev = "#{dev}.#{vlan}" if vlan
  OpenNebula.exec_and_log("#{COMMANDS[:ip]} link set #{dev} up")
  OpenNebula.exec_and_log("sudo sh -c 'echo ip link set #{dev} up >> /etc/rc.d/rc.local'")
end

??? | ?? -- Andreas Calvo Gómez, Systems Engineer, Scytl Secure Electronic Voting
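On RHEL-family hosts, an alternative to appending commands to rc.local is to persist the VLAN and bridge through the distribution's own network scripts. A sketch assuming the eth0 / VLAN 100 / onebr100 names from this thread (the bridge name must match what OpenNebula expects for that virtual network):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0.100
DEVICE=eth0.100
VLAN=yes
BRIDGE=onebr100
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-onebr100
DEVICE=onebr100
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```

With these files in place, `service network restart` (or a reboot) brings the interfaces up without any manual steps.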
[one-users] Granted operations with ACLs
Hi, Following the example in the documentation for managing ACLs ( http://opennebula.org/documentation:rel3.8:manage_acl), how should this be implemented if we have two groups of users which should be able to execute VMs, plus one set of users (say, administrators) which should be able to create any resource and assign it to the corresponding group? Imagine a scenario where we have two groups, A and B, and a group of admins, Admin. Groups A and B should only be able to execute resources in their own group. The Admin group should be able to create resources and assign them to a specific group. The only thing that is missing is the chgrp/chown commands in the Sunstone web UI, which is our reference tool. -- Andreas Calvo Gómez, Systems Engineer, Scytl Secure Electronic Voting
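Rules along these lines could be expressed with the oneacl CLI. This is a sketch only: the group IDs are placeholders (assume A = @100, B = @101, Admin = @102) and must be replaced with the real IDs from `onegroup list`:

```shell
# Users in A and B may use and manage VMs belonging to their own group
oneacl create "@100 VM/@100 USE+MANAGE"
oneacl create "@101 VM/@101 USE+MANAGE"

# Admins may create these resource types anywhere
oneacl create "@102 VM+IMAGE+TEMPLATE+NET/* CREATE"

# Inspect the resulting rules
oneacl list
```

Reassigning a created resource to group A or B would still be done with chown/chgrp from the CLI (e.g. `onetemplate chgrp <id> <group>`) until Sunstone exposes it.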
Re: [one-users] Using shared fs and ssh TM
Javier, Stopping and resuming a machine worked, which is really what we were looking for. I must say having datastores really simplifies all management -- once you get used to it. Thanks El 30/07/12 14:37, Javier Fontan escribió: As a side note, this precise configuration was not tested, so expect dragons. The drivers may need to be tweaked, but it is a good starting point. On Mon, Jul 30, 2012 at 2:35 PM, Javier Fontan wrote: Then, if the system datastore will be local, you have to change the tm driver from shared to ssh. Use "onedatastore update" to do so. http://opennebula.org/documentation:rel3.6:system_ds#using_the_ssh_transfer_driver On Mon, Jul 30, 2012 at 2:13 PM, Andreas Calvo wrote: No, this directory is linked to exist only on the node's local filesystem -- as it is the one that holds the QCOW2 deltas. Only the image datastore (/var/lib/one//datastores/1/) is shared. The main problem is R/W access to the shared filesystem, and that's why we tried to hold the deltas on the node's local filesystem -- by linking the system datastore to a local path on the node. So far so good, except it cannot do any migration or save. El 30/07/12 13:50, Javier Fontan escribió: The checkpoint should be located at /var/lib/one//datastores/0/5366 on both the node and the frontend. That is the system datastore. As the driver is shared, the tm is not copying the file, as it is supposed to be shared using NFS or a similar shared filesystem. Is /var/lib/one//datastores/0 shared between the frontend and the nodes? On Mon, Jul 30, 2012 at 12:35 PM, Andreas Calvo wrote: Javier, Where should the checkpoint be located? I've checked in /var/lib/one/$VMID, but no checkpoint was found. Only 4 files are located under this path: context.sh, deployment.0, transfer.0.prolog, transfer.0.stop. transfer.0.stop shows:

MV qcow2 cloud02:/var/lib/one//datastores/0/5366/disk.0 opennebula:/var/lib/one/datastores/0/5366/disk.0 5366 1
MV shared cloud02:/var/lib/one//datastores/0/5366 opennebula:/var/lib/one/datastores/0/5366 5366 0

On the node which ran the VM there is a directory /var/lib/one//datastores/0/$VMID with a checkpoint file in it, but I can't find it on the frontend. El 27/07/12 18:02, Javier Fontan escribió: Somehow the checkpoint did not get copied. Can you try to stop a VM and check if the checkpoint is transferred back to the frontend? Just to see if the tm is working correctly. On Fri, Jul 27, 2012 at 11:49 AM, Andreas Calvo wrote: No, the system datastore (where all QCOW2 deltas are stored) is not shared; it relies on the node's local filesystem. However, datastore 1 (where all images are stored) is stored and shared on all nodes under the same path. To use the same directory logic, the paths have been linked. In this case: /var/lib/one is shared on all nodes; /var/lib/one/datastores/0 is linked to /one/datastores/0, which is local; /one/datastores/1 is linked to /var/lib/one/datastores/1 (which is shared). If I'm not wrong, the system datastore holds the incremental changes, whereas the other datastores hold images. El 27/07/12 11:22, Javier Fontan escribió: Is your system datastore (0) shared and mounted on all your nodes? On Wed, Jul 25, 2012 at 3:56 PM, Andreas Calvo wrote: Hello again, I've tried to reuse the SSH MV script, but it fails. Output is:

Wed Jul 25 15:51:59 2012 [VMM][I]: Command execution fail: /var/tmp/one/vmm/kvm/restore /var/lib/one//datastores/0/5220/checkpoint cloud13 5220 cloud13
Wed Jul 25 15:51:59 2012 [VMM][E]: restore: Command "virsh --connect qemu:///system restore /var/lib/one//datastores/0/5220/checkpoint" failed: error: Failed to restore domain from /var/lib/one//datastores/0/5220/checkpoint
Wed Jul 25 15:51:59 2012 [VMM][I]: error: Failed to create file '/var/lib/one//datastores/0/5220/checkpoint': No such file or directory
Wed Jul 25 15:51:59 2012 [VMM][E]: Could not restore from /var/lib/one//datastores/0/5220/checkpoint
Wed Jul 25 15:51:59 2012 [VMM][I]: ExitCode: 1
Wed Jul 25 15:51:59 2012 [VMM][I]: Failed to execute virtualization driver operation: restore.
Wed Jul 25 15:51:59 2012 [VMM][E]: Error restoring VM: Could not restore from /var/lib/one//datastores/0/5220/checkpoint
Wed Jul 25 15:51:59 2012 [DiM][I]: New VM state is FAILED

Any thoughts? I have to check, but I expect the same problem when stopping/resuming VMs. El 11/07/12 18:38, Javier Fontan escribió: You are right, I've overlooked the driver. In qcow the mv driver is a dummy, as it expects the qcow image to be shared. You can just copy the mv script from the ssh tm to the qcow remotes directory. I have not tested that, but it should work. The qcow image will be moved to the frontend on stop and back to a node on resume. The backing storage path should be the same in your setup. On Wed, Jul 11, 2012 at 6:07 PM, Andreas Calvo wrote: The shared storage is mounted in the same place on all the nodes. The directory structure is as follows: /var/lib/one/datastores (shared s
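Javier's suggestions in this thread amount to something like the following on the frontend. A sketch only: paths assume a system-wide installation (adjust under $ONE_LOCATION for self-contained installs), and the exact commands may differ between ONE 3.x releases:

```shell
# 1. Make the system datastore use the ssh transfer driver
#    (onedatastore update opens $EDITOR; set TM_MAD="ssh")
onedatastore update 0

# 2. Replace the dummy qcow2 "mv" script with the ssh one, so the deltas
#    and checkpoint in the local system datastore are copied over SSH
#    on stop/resume and migration
cp /var/lib/one/remotes/tm/ssh/mv /var/lib/one/remotes/tm/qcow2/mv

# 3. Push the updated remote scripts out to the hosts
onehost sync
```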
Re: [one-users] Using shared fs and ssh TM
No, the system datastore (where all QCOW2 deltas are stored) is not shared; it relies on the node's local filesystem. However, datastore 1 (where all images are stored) is shared on all nodes under the same path. To keep the same directory logic, the paths have been linked. In this case:

/var/lib/one is shared on all nodes
/var/lib/one/datastores/0 is linked to /one/datastores/0, which is local
/one/datastores/1 is linked to /var/lib/one/datastores/1 (which is shared)

If I'm not mistaken, the system datastore holds the incremental changes, whereas the other datastores hold the images.

El 27/07/12 11:22, Javier Fontan escribió:
> Is your system datastore (0) shared and mounted in all your nodes?
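The link layout described above can be sketched end to end with a throwaway directory tree (temporary directories stand in for the real /var/lib/one and /one mount points):

```shell
# Sketch of the mixed shared/local datastore layout; temp dirs are
# stand-ins for the real mount points on a node.
shared=$(mktemp -d)    # stands in for /var/lib/one (GFS2, mounted on all nodes)
local_ds=$(mktemp -d)  # stands in for /one (node-local disk)
mkdir -p "$shared/datastores/1" "$local_ds/datastores/0"

# system datastore (0): the shared path is a link to node-local storage
ln -s "$local_ds/datastores/0" "$shared/datastores/0"

# image datastore (1): the local path links back to the shared copy
ln -s "$shared/datastores/1" "$local_ds/datastores/1"

ls -l "$shared/datastores" "$local_ds/datastores"
```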
Re: [one-users] Using shared fs and ssh TM
Hello again, I've tried to reuse the SSH mv script, but it fails. The output is:

Wed Jul 25 15:51:59 2012 [VMM][I]: Command execution fail: /var/tmp/one/vmm/kvm/restore /var/lib/one//datastores/0/5220/checkpoint cloud13 5220 cloud13
Wed Jul 25 15:51:59 2012 [VMM][E]: restore: Command "virsh --connect qemu:///system restore /var/lib/one//datastores/0/5220/checkpoint" failed: error: Failed to restore domain from /var/lib/one//datastores/0/5220/checkpoint
Wed Jul 25 15:51:59 2012 [VMM][I]: error: Failed to create file '/var/lib/one//datastores/0/5220/checkpoint': No such file or directory
Wed Jul 25 15:51:59 2012 [VMM][E]: Could not restore from /var/lib/one//datastores/0/5220/checkpoint
Wed Jul 25 15:51:59 2012 [VMM][I]: ExitCode: 1
Wed Jul 25 15:51:59 2012 [VMM][I]: Failed to execute virtualization driver operation: restore.
Wed Jul 25 15:51:59 2012 [VMM][E]: Error restoring VM: Could not restore from /var/lib/one//datastores/0/5220/checkpoint
Wed Jul 25 15:51:59 2012 [DiM][I]: New VM state is FAILED

Any thoughts? I still have to check, but I expect the same problem when stopping/resuming VMs.

El 11/07/12 18:38, Javier Fontan escribió:
> You are right, I've overlooked the driver. In qcow the mv driver is a dummy, as it expects the qcow image to be shared. You can just copy the mv script from the ssh TM to the qcow remotes directory. I have not tested that, but it should work. The qcow image will be moved to the frontend on stop, and back to a node on resume. The backing storage path should be the same in your setup.

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
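Javier's suggestion — replacing the qcow2 TM's dummy mv with the ssh TM's mv script — can be sketched as below. The snippet builds a throwaway remotes tree with a dummy script so it runs anywhere; on a real frontend the tree typically lives under /var/lib/one/remotes (an assumption, check your install), and the changed scripts must afterwards be pushed out to the nodes (for example with `onehost sync` on releases that provide it).

```shell
# Throwaway stand-in for the frontend's remotes tree (real path is
# typically /var/lib/one/remotes -- an assumption for this sketch).
remotes=$(mktemp -d)
mkdir -p "$remotes/tm/ssh" "$remotes/tm/qcow2"

# Dummy ssh TM mv script, standing in for the real one.
printf '#!/bin/sh\n# ssh TM mv: copy the VM directory between hosts\n' > "$remotes/tm/ssh/mv"
chmod +x "$remotes/tm/ssh/mv"

# The actual fix: overwrite the qcow2 dummy mv with the ssh implementation,
# so stop/resume/migrate actually moves the local qcow2 deltas.
cp -p "$remotes/tm/ssh/mv" "$remotes/tm/qcow2/mv"
```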
Re: [one-users] Using shared fs and ssh TM
The shared storage is mounted in the same place on all the nodes. The directory structure is as follows:

/var/lib/one/datastores (shared storage)
-- 0 -> link to /one/datastores/0
-- 1
-- 100
/one/datastores/ (local storage)
-- 0
-- 1 -> link to /var/lib/one/datastores/1
-- 100 -> link to /var/lib/one/datastores/100

When a virtual machine is stopped, its delta is stored in /one/datastores/0/$VMID. When another node tries to resume the virtual machine, it expects to find the delta (and some other files, like the checkpoint) in the same place, but, as it is not on its local disk, it fails. The same thing happens when a virtual machine is migrated. I think the TM should be tweaked to be part qcow and part ssh.

El 11/07/12 17:51, Javier Fontan escribió:
> If the shared datastore is mounted in the same place in both nodes there won't be any problem. The base image will be accessible from both nodes (shared) and the qcow delta is moved to the host.
Re: [one-users] Using shared fs and ssh TM
Javier, thanks for your quick reply! The only problem is that when a stopped virtual machine is resumed from another host in the cloud (different from the one on which it was stopped), it cannot find the deployed (in this case, linked) image and the deltas.

El mar 10 jul 2012 11:46:55 CEST, Javier Fontan escribió:
> Sure, it is possible. You need to have the datastore that holds the images shared and mount it in every node. Then you'll make the system datastore (0) local and configure it to use the qcow TM drivers. Make sure that /var/lib/one/datastores/0 is not mounted from the shared storage on the nodes, as this is where the deltas will be written.
[one-users] Using shared fs and ssh TM
Hello, Is there any way to mix TMs in an environment? We currently have a shared FS with GFS2 using qcow, but when a lot of VMs are launched, writing changes becomes an I/O bottleneck. We were thinking of a mixture where the image is shared and the incremental changes (qcow) are written locally. Is it possible? Thanks
[one-users] Shared storage performance
Hello, We are facing a performance issue in our OpenNebula infrastructure, and I'd like to hear your opinion on the best approach to solve it. We have 15 nodes plus 1 front-end. They all reach the same shared storage through iSCSI, and they mount the OpenNebula home folder (/var/lib/one), which is a GFS2 partition. All machines are based on CentOS 6.2, using QEMU-KVM. We use the cloud to run tests against a 120-VM farm. As we are using QCOW2, the need to write changes to disk is greatly reduced. However, every machine needs to copy over 1 GB of data each time it starts, and this collapses our iSCSI network until some machines time out accessing the data, which stops the test. The OpenNebula infrastructure suffers a read/write penalty, leaving some VMs in pending state and the system (almost) non-responsive. We are not using the nodes' local disks at all. It seems that the only option is to use the local disk to write disk changes, but I wanted to hear your experienced opinion on our problem. Thanks!
Re: [one-users] write changes using "save as" and qcow2
Jaime, Yes, it shows as if the image was going to be created.

VIRTUAL MACHINE 1601 INFORMATION
ID : 1601
NAME : one-1601
USER : oneadmin
GROUP : oneadmin
STATE : ACTIVE
LCM_STATE : RUNNING
HOSTNAME : cloud12
START TIME : 05/16 12:31:22
END TIME : -
DEPLOY ID : one-1601

VIRTUAL MACHINE MONITORING
NET_RX : 0
USED CPU : 0
USED MEMORY : 0
NET_TX : 0

PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---

VIRTUAL MACHINE TEMPLATE
CONTEXT=[ DNS1="192.168.0.100", DNS2="192.168.0.101", FILES="/var/lib/one/templates/context-scripts/linux/jmeter/init.sh /var/lib/one/templates/id_rsa.pub", GATEWAY="10.4.16.1", HOSTNAME="jdetect-1601", ROOT_PUBKEY="id_rsa.pub", TARGET="hdc", USERNAME="technical", USER_PUBKEY="id_rsa.pub" ]
CPU="0.5"
DISK=[ BUS="virtio", CLONE="YES", CLUSTER_ID="100", DATASTORE="default", DATASTORE_ID="1", DISK_ID="0", DRIVER="qcow2", IMAGE="jdetect-node3", IMAGE_ID="42", READONLY="NO", SAVE="YES", SAVE_AS="80", SAVE_AS_SOURCE="/var/lib/one/datastores/1/d2a57eb9dfa252c1ec5b044ff351b392", SOURCE="/var/lib/one/datastores/1/f971c24769f5312679fd455162a76fbc", TARGET="vda", TM_MAD="qcow2", TYPE="DISK" ]
FEATURES=[ ACPI="yes" ]
GRAPHICS=[ PORT="7501", TYPE="vnc" ]
MEMORY="1024"
NAME="one-1601"
NIC=[ BRIDGE="onebr4", CLUSTER_ID="100", IP="10.4.16.99", MAC="02:00:0a:04:10:63", NETWORK="opennebula", NETWORK_ID="4", PHYDEV="eth1", VLAN="YES", VLAN_ID="102" ]
OS=[ ARCH="x86_64", BOOT="hd" ]
RAW=[ TYPE="kvm" ]
REQUIREMENTS="CLUSTER_ID = 100"
TEMPLATE_ID="14"
VCPU="1"
VMID="1601"

VIRTUAL MACHINE HISTORY
SEQ HOSTNAME REASON START TIME PTIME
0 cloud12 none 05/16 12:31:33 0d 00:00 0d 00:00

El 16/05/12 12:22, Jaime Melis escribió:
Hello Andreas, I can't seem to find anything wrong with your setup. I have a question, though: once you do onevm saveas, does it show in "onevm show" like this?
$ onevm show [...] DISK = [ [...] SAVE="YES", SAVE_AS="4", SAVE_AS_SOURCE="/var/lib/one/datastores/1/9e2ca60e8780e40f79d13e7d587b471c", [...]
cheers, Jaime

On Wed, May 9, 2012 at 10:22 AM, Andreas Calvo wrote:
Sure!

/var/lib/one/config:
AUTH_MAD=ARGUMENTS=--authn ssh,x509,ldap,server_cipher,server_x509,EXECUTABLE=one_auth_mad
DATASTORE_LOCATION=/var/lib/one//datastores
DATASTORE_MAD=ARGUMENTS=-t 15 -d fs,vmware,iscsi,EXECUTABLE=one_datastore
DB=BACKEND=sqlite
DEBUG_LEVEL=3
DEFAULT_DEVICE_PREFIX=hd
DEFAULT_IMAGE_TYPE=OS
ENABLE_OTHER_PERMISSIONS=YES
HM_MAD=EXECUTABLE=one_hm
HOST_MONITORING_INTERVAL=600
HOST_PER_INTERVAL=15
IMAGE_RESTRICTED_ATTR=SOURCE
IM_MAD=ARGUMENTS=-r 0 -t 15 kvm,EXECUTABLE=one_im_ssh,NAME=im_kvm
MAC_PREFIX=02:00
MANAGER_TIMER=15
NETWORK_SIZE=254
PORT=2633
SCRIPTS_REMOTE_DIR=/var/tmp/one
SESSION_EXPIRATION_TIME=900
TM_MAD=ARGUMENTS=-t 15 -d dummy,shared,qcow2,ssh,vmware,iscsi,EXECUTABLE=one_tm
VM_MAD=ARGUMENTS=-t 15 -r 0 kvm,DEFAULT=vmm_exec/vmm_exec_kvm.conf,EXECUTABLE=one_vmm_exec,NAME=vmm_kvm,TYPE=kvm
VM_PER_INTERVAL=5
VM_POLLING_INTERVAL=600
VM_RESTRICTED_ATTR=DISK/SOURCE
VM_RESTRICTED_ATTR=NIC/MAC
VM_RESTRICTED_ATTR=NIC/VLAN_ID
VM_RESTRICTED_ATTR=RANK
VNC_BASE_PORT=5900

/var/log/one/1319.log:
Wed May 9 10:14:48 2012 [TM][I]: ExitCode: 0
Wed May 9 10:14:49 2012 [LCM][I]: New VM state is BOOT
Wed May 9 10:14:49 2012 [VMM][I]: Generating deployment file: /var/lib/one/1319/deployment.0
Wed May 9 10:14:49 2012 [VMM][I]: ExitCode: 0
Wed May 9 10:14:49 2012 [VMM][I]: Successfully execute network driver operation: pre.
Wed May 9 10:14:49 2012 [VMM][I]: ExitCode: 0
Wed May 9 10:14:49 2012 [VMM][I]: Successfully execute virtualization driver operation: deploy.
Wed May 9 10:14:49 2012 [VMM][I]: ExitCode: 0
Wed May 9 10:14:49 2012 [VMM][I]: Successfully execute network driver operation: post.
Wed May 9 10:14:50 2012 [LCM][I]: New VM state is RUNNING
Wed May 9 10:16:36 2012 [LCM][I]: New VM state is SHUTDOWN

VM in Shutdown state and nothing else is done (no EPILOG). oneimage list:
Re: [one-users] write changes using "save as" and qcow2
59 68 69 Thanks!

On Tue, 2012-05-08 at 17:13 +0200, Jaime Melis wrote:
> Hello Andreas,
> I've tested the qcow2 drivers again with saveas and they do work for me. Could you attach more logs in order to debug this?
> /var/lib/one/config (be sure to blank the passwords, if any)
> /var/log/one/.log
> and onedatastore show and oneimage show of the image and its datastore.
> Regards, Jaime
Re: [one-users] about restricted attributes in ACLs
Thanks, I didn't know about the latter one, and will use it.

On Mon, 2012-05-07 at 23:06 +0200, Ruben S. Montero wrote:
> Hi, You can either:
> 1. Create the templates with oneadmin and set the permissions so everybody, or a set of users, can use them (this way the template is considered secure). This can be done with onetemplate chmod, or by setting up an ACL for more complex sharing needs.
> 2. Remove CONTEXT/FILES as a VM_RESTRICTED_ATTR in oned.conf, making FILES a valid attribute for everyone.
> Cheers, Ruben
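Ruben's second option amounts to editing the restricted-attribute list in oned.conf and restarting oned. A sketch (the exact default list varies by release, so treat the entries below as illustrative):

```
# /etc/one/oned.conf -- restricted VM template attributes (illustrative).
# Removing or commenting the CONTEXT/FILES line lets non-oneadmin users
# set FILES in their own templates.
#VM_RESTRICTED_ATTR = "CONTEXT/FILES"
VM_RESTRICTED_ATTR = "DISK/SOURCE"
VM_RESTRICTED_ATTR = "NIC/MAC"
VM_RESTRICTED_ATTR = "NIC/VLAN_ID"
VM_RESTRICTED_ATTR = "RANK"
```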
[one-users] about restricted attributes in ACLs
As per redmine issue http://dev.opennebula.org/issues/1159 , it seems that only oneadmin templates are not being checked. In my scenario, users should be able to create their own templates (or copy oneadmin's) and fire up instances accessing CONTEXT/FILES. I've granted:

15 @101 --N-- * u---
22 @101 -H--- * -m--
23 @101 V--I-T--- @101 umac
25 @101 V--I-T--- * ---c

But when a user creates its own template and tries to start it, it complains about restricted attributes in CONTEXT/FILES. Is it correct to do it that way? Thanks
Re: [one-users] write changes using "save as" and qcow2
It does not get to the EPILOG state, so it is not saving the changes. Do I have to enable the qcow driver in the datastore drivers?

On Fri, 2012-05-04 at 14:03 -0400, Shankhadeep Shome wrote:
> No, you don't have to change the image to persistent; it wouldn't change the running VM anyway. After the shutdown there are two more states, EPILOG and DONE. EPILOG is reached after the VM is completely shut down and reported back; DONE is after the VM files are deleted from the system datastore. You should see both steps in the VM log. Maybe you can post the last few lines of the VM log: just go to /var/log/one/.log on the OpenNebula server and post the last 20 lines.
Re: [one-users] write changes using "save as" and qcow2
What I see when I run the SHUTDOWN command is:

Fri May 4 11:03:11 2012 [LCM][I]: New VM state is SHUTDOWN

Is there something else I should do? The image is not marked as persistent; should it be changed?

On Thu, 2012-05-03 at 23:51 -0400, Shankhadeep Shome wrote:
> If you use a qcow backing store, this is what happens in the background:
>
> qemu-img create -f qcow2 -b (your original image) (running image)
>
> When you save as, it's:
>
> qemu-img convert (running image + backing store) -O qcow2 (new base image)
>
> The VM log should look like this... check to see if you have errors. How big is your source image? Conversion can take a while depending on your image size and backing store.
>
> Thu May 3 23:41:28 2012 [LCM][I]: New VM state is SHUTDOWN
> Thu May 3 23:41:48 2012 [VMM][I]: ExitCode: 0
> Thu May 3 23:41:48 2012 [VMM][I]: Successfully execute virtualization driver operation: shutdown.
> Thu May 3 23:41:48 2012 [VMM][I]: ExitCode: 0
> Thu May 3 23:41:48 2012 [VMM][I]: Successfully execute network driver operation: clean.
> Thu May 3 23:41:49 2012 [LCM][I]: New VM state is EPILOG
> Thu May 3 23:42:42 2012 [TM][I]: mvds: Moving /var/lib/one/datastores/0/88/disk.0 to datastore as /var/lib/one/datastores/101/4f062daaf6ad2f47fd36c6b35a0bd56c
> Thu May 3 23:42:42 2012 [TM][I]: ExitCode: 0
> Thu May 3 23:42:43 2012 [TM][I]: delete: Deleting /var/lib/one/datastores/0/88
> Thu May 3 23:42:43 2012 [TM][I]: ExitCode: 0
> Thu May 3 23:42:43 2012 [DiM][I]: New VM state is DONE
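The backing-store flow Shankhadeep describes can be tried end to end with throwaway files. A minimal sketch, assuming qemu-img is installed (the file names are hypothetical; the snippet skips itself if qemu-img is missing):

```shell
# Demonstrates the qcow2 backing-store mechanics behind "save as",
# using hypothetical file names in a temporary directory.
workdir=$(mktemp -d)
cd "$workdir"
if command -v qemu-img >/dev/null 2>&1; then
  # base image, standing in for the image-datastore copy
  qemu-img create -f qcow2 base.qcow2 10M
  # running image: a qcow2 overlay that stores only the deltas on top of the base
  qemu-img create -f qcow2 -b base.qcow2 -F qcow2 running.qcow2
  # "save as": flatten overlay + backing file into a new standalone image
  qemu-img convert -O qcow2 running.qcow2 newbase.qcow2
else
  echo "qemu-img not installed; nothing to demonstrate"
fi
```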
[one-users] write changes using "save as" and qcow2
Hi all, What is the procedure to save a running VM as an image if it's running with QCOW2? As per the documentation, I've used shutdown after issuing save as, but the image gets into a LOCKED state and does not progress. If the VM gets deleted, the image changes to READY, but I think it's just a link to the old image used by the VM. Trying to start a new template using this new image throws an error. I've tried with SHUTDOWN and STOP without luck. I guess I'm missing something; does anyone know how to do it? Thanks in advance
[one-users] external DHCP
Hello, We're building a VM farm with OpenNebula. However, I'm a little bit confused about how to handle networking. Our scenario is based on VLAN tagging, with an external DHCP server for every VLAN. Our goal is to use that external DHCP server to provide IPs for all VMs. When creating a Virtual Network, a mandatory parameter is the range or fixed addresses (otherwise it fails, complaining it cannot allocate the VM network). We've tried Open vSwitch (without luck -- it fails to create the VM directory) and 802.1Q (which needs an explicit range/fixed address to instantiate a template). Is it possible to use an external DHCP server? Is there any issue using either VLAN tagging driver (802.1Q or Open vSwitch)? Thanks!
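A common workaround is to define the Virtual Network with a ranged address pool that OpenNebula can allocate from, while the guests simply ignore the context IP and lease their addresses from the external DHCP server on that VLAN. An illustrative 802.1Q network template (names and values are hypothetical, loosely drawn from the setup above):

```
# Illustrative 802.1Q virtual network template; adjust names/values.
NAME    = "vlan102"
TYPE    = RANGED
PHYDEV  = "eth1"
VLAN    = "YES"
VLAN_ID = 102
BRIDGE  = "onebr4"
NETWORK_ADDRESS = "10.4.16.0"
NETWORK_SIZE    = 254
```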