Re: [one-users] newbie how-to configure network to connect to the vm
Hello Nikolaj,

I think the problem is that your AR uses the wrong network prefix. virbr0 is configured with the 192.168.122.0/24 prefix, while your AR definition has IP = 192.168.0.100.

Best,
Valentin

On Thu, Nov 20, 2014 at 6:20 PM, Nikolaj Majorov niko...@majorov.biz wrote:

Hi Thomas, thanks for the help. On the OpenNebula host server:

[root@CentOS-70-64-minimal ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
    link/ether 6c:62:6d:d9:09:05 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6e62:6dff:fed9:905/64 scope link
       valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 6c:62:6d:d9:09:05 brd ff:ff:ff:ff:ff:ff
    inet 46.4.99.35 peer 46.4.99.33/32 brd 46.4.99.35 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::6e62:6dff:fed9:905/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether fe:00:c0:a8:00:64 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 500
    link/ether fe:00:c0:a8:00:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:c0ff:fea8:64/64 scope link
       valid_lft forever preferred_lft forever
10: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 500
    link/ether fe:00:c0:a8:00:65 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:c0ff:fea8:65/64 scope link
       valid_lft forever preferred_lft forever
11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 500
    link/ether fe:00:c0:a8:00:66 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:c0ff:fea8:66/64 scope link
       valid_lft forever preferred_lft forever

[root@CentOS-70-64-minimal ~]# ip r
default via 46.4.99.33 dev br0
46.4.99.33 dev br0 proto kernel scope link src 46.4.99.35
169.254.0.0/16 dev br0 scope link metric 1003
192.168.0.102 dev virbr0 scope link
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

[root@CentOS-70-64-minimal ~]# ip neigh show
192.168.0.102 dev virbr0 lladdr 02:00:c0:a8:00:66 STALE
192.168.0.101 dev br0 lladdr 02:00:c0:a8:00:65 STALE
192.168.122.56 dev virbr0 lladdr 02:00:c0:a8:00:64 STALE
192.168.0.100 dev virbr0 FAILED
192.168.0.100 dev br0 FAILED
46.4.99.33 dev br0 lladdr 00:26:88:75:e6:88 REACHABLE

[root@CentOS-70-64-minimal ~]# iptables -nvL
Chain INPUT (policy ACCEPT 767K packets, 2018M bytes)
 pkts bytes target prot opt in     out source    destination
    0     0 ACCEPT udp  -- virbr0 *   0.0.0.0/0 0.0.0.0/0 udp dpt:53
    0     0 ACCEPT tcp  -- virbr0 *   0.0.0.0/0 0.0.0.0/0 tcp dpt:53
   17  5576 ACCEPT udp  -- virbr0 *   0.0.0.0/0 0.0.0.0/0 udp dpt:67
    0     0 ACCEPT tcp  -- virbr0 *   0.0.0.0/0 0.0.0.0/0 tcp dpt:67

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in     out    source           destination
    0     0 ACCEPT all  -- *      virbr0 0.0.0.0/0        192.168.122.0/24 ctstate RELATED,ESTABLISHED
    0     0 ACCEPT all  -- virbr0 *      192.168.122.0/24 0.0.0.0/0
    0     0 ACCEPT all  -- virbr0 virbr0 0.0.0.0/0        0.0.0.0/0
    0     0 REJECT all  -- *      virbr0 0.0.0.0/0        0.0.0.0/0 reject-with icmp-port-unreachable
    0     0 REJECT all  -- virbr0 *      0.0.0.0/0        0.0.0.0/0 reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT 685K packets, 270M bytes)
 pkts bytes target prot opt in out source destination

but iptables seems not to be running:

[root@CentOS-70-64-minimal ~]# service iptables status
Redirecting to /bin/systemctl status iptables.service
iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)
   Active: inactive (dead)
Nov 20 16:38:39 CentOS-70-64-minimal systemd[1]: Stopped IPv4 firewall with iptables.

From within the VM (I can connect to it only with VNC, so I can't simply cut and paste):

# ifconfig -a
eth0: Link ... inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0

# route -n
Kernel IP routing table
Destination  Gateway  Genmask  Flags  Metric  Ref  Use
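The fix amounts to defining the AR inside virbr0's subnet. A minimal virtual-network template along those lines, as a sketch only (the network name and starting address are illustrative, not taken from Nikolaj's actual setup beyond the 192.168.122.0/24 range):

```text
NAME   = "virbr0-net"
BRIDGE = "virbr0"
AR = [
  TYPE = "IP4",
  IP   = "192.168.122.100",   # inside virbr0's 192.168.122.0/24, not 192.168.0.x
  SIZE = "10"
]
```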
Re: [one-users] newbie how-to configure network to connect to the vm
Hi Nikolaj,

Thomas deserves all the credit, he was wise enough to ask the right questions. After that it was just a matter of text scraping :).

Best,
Valentin

On Thu, Nov 20, 2014 at 7:23 PM, Nikolaj Majorov niko...@majorov.biz wrote:

Hi, many thanks Valentin! Thanks Thomas! Valentin, you are right! Changing the prefix allowed me to connect to the guest VM. Cool!

regards,
Nikolaj

On 20.11.2014, at 17:56, Valentin Bud valentin@gmail.com wrote:

Hello Nikolaj, I think the problem is that your AR uses the wrong network prefix. virbr0 is configured with the 192.168.122.0/24 prefix, while your AR definition has IP = 192.168.0.100.

Best,
Valentin
Re: [one-users] Bugs in documentation - CentOS
Hello Jaco,

Maybe Javier's presentation "OpenNebula and tips for CentOS 7" [1] can help you achieve what you desire.

[1]: http://www.slideshare.net/opennebula

Best,
Valentin

On Sun, Oct 19, 2014 at 3:59 AM, Jaco bakgat...@gmail.com wrote:

Hi folks, (1st post)

I've toyed with ON before, but decided to finally commit. I scratched my server, installed CentOS 7 (minimal), and followed the guide provided here:
http://docs.opennebula.org/4.8/design_and_installation/quick_starts/qs_centos7_kvm.html

(Context: I've been using Ubuntu/Debian for a very long time, but recently decided to commit to CentOS/Fedora, so I'm a little rusty in places.)

Overall it went OK, but not great. A few things that tripped me up:

* CentOS 7 by default comes with firewalld - something that's not covered in the official docs. I initially thought it was iptables preventing access from the LAN, but managed to find this issue by accessing the services through an SSH tunnel.
* Telling people to 'disable SELinux' is simply a bad idea, sets a bad precedent, and encourages lax security practices IMHO. It's there for a reason. For now I've set it to permissive rather than disabled, but will re-enforce it again later.

Otherwise I've followed the guide dutifully, but I'm unable to provision my 1st instance. I get this in the log:

Sun Oct 19 13:56:54 2014 [Z0][DiM][I]: New VM state is ACTIVE.
Sun Oct 19 13:56:54 2014 [Z0][LCM][I]: New VM state is PROLOG.
Sun Oct 19 13:56:56 2014 [Z0][LCM][I]: New VM state is BOOT
Sun Oct 19 13:56:56 2014 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/5/deployment.0
Sun Oct 19 13:56:56 2014 [Z0][VMM][I]: Remote worker node files not found
Sun Oct 19 13:56:56 2014 [Z0][VMM][I]: Updating remotes
Sun Oct 19 13:56:57 2014 [Z0][VMM][I]: Command execution fail: /var/tmp/one/vnm/tin/pre $REDACTED_HASH
Sun Oct 19 13:56:57 2014 [Z0][VMM][I]: bash: line 2: /var/tmp/one/vnm/tin/pre: No such file or directory
Sun Oct 19 13:56:57 2014 [Z0][VMM][I]: ExitCode: 127
Sun Oct 19 13:56:57 2014 [Z0][VMM][I]: Failed to execute network driver operation: pre.
Sun Oct 19 13:56:57 2014 [Z0][VMM][E]: Error deploying virtual machine
Sun Oct 19 13:56:58 2014 [Z0][DiM][I]: New VM state is FAILED

/var/tmp/one/vnm/tin/pre/ did not exist, so I created it as user oneadmin. The virtual network is named default, the template CentOS-7, the image CentOS-7-one-4.8 (as per the docs). The default out-of-the-box setup does not work, and/or the documentation is incomplete. What am I missing? Please advise - J

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Running Opennebula on AWS
Hello Xiyi Zhu,

I have just created and tested an OpenNebula AMI running on Ubuntu 14.04. It might be the same as the official OpenNebula AWS sandbox [1]; I don't know, as I have never tried the latter. I will when I have time, and will rebuild the AMI to reflect the official one.

The AMI name is OpenNebula-4.8-Ubuntu-Trusty - ami-634b0453, and it is in us-west-2 (Oregon). The deployment uses the ttylinux.raw image from OpenNebula, a simple template to launch that image, and a default network. The localhost is created with: onehost create -i kvm -v kvm -n dummy localhost. The network is the libvirt default network on bridge virbr0. You can access Sunstone at <AWS PUBLIC DNS>:9869.

[1]: http://opennebula.org/tryout/sandboxaws/

Enjoy.

Best,
Valentin

On Fri, Oct 17, 2014 at 4:20 AM, XIYI ZHU zhux...@hotmail.com wrote:

Hello, My name is Xiyi Zhu. I work for the Dev Support department. Some customers have asked about using OpenNebula on AWS. Here is the link they followed: http://opennebula.org/tryout/sandboxaws/

However, the AMI you provided is an instance-store AMI. Do you provide any EBS-backed AMIs, since the root volume of instance-store AMIs is fixed at 10G? If you do, please provide me the AMI IDs and the region they are in. If not, do you have a way to convert instance-store AMIs to EBS AMIs that works for OpenNebula? I tried the procedure that AWS has: the web interface works after the conversion is done, but it couldn't start the VMs. It works when it is on an instance-store root volume. Or could you provide some instructions for installing OpenNebula in an EC2 instance, especially for CentOS or Ubuntu?

Thank you

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Running Opennebula on AWS
Hello,

I have followed the documentation about installing OpenNebula on Ubuntu [1] and used Packer [2] to build it. It would be nice if you could tell us whether the AMI works. Thanks!

[1]: http://docs.opennebula.org/4.8/design_and_installation/quick_starts/qs_ubuntu_kvm.html
[2]: http://www.packer.io

Best,
Valentin

On Fri, Oct 17, 2014 at 8:43 PM, XIYI ZHU zhux...@hotmail.com wrote:

Hello, Could you provide me the steps of how you accomplished it? Thank you

--
From: valentin@gmail.com
Date: Fri, 17 Oct 2014 16:55:48 +0300
Subject: Re: [one-users] Running Opennebula on AWS
To: zhux...@hotmail.com
CC: users@lists.opennebula.org

Hello Xiyi Zhu, I have just created and tested an OpenNebula AMI running on Ubuntu 14.04.

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
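Valentin does not share his actual Packer template, but a build like the one he describes can be sketched roughly as follows. Every value here (the source AMI placeholder, instance type, and script name) is an assumption for illustration, not his real configuration; note that the amazon-ebs builder produces the EBS-backed AMIs Xiyi asked about:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-west-2",
    "source_ami": "<ubuntu-14.04-base-ami>",
    "instance_type": "m3.medium",
    "ssh_username": "ubuntu",
    "ami_name": "OpenNebula-4.8-Ubuntu-Trusty {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "install_opennebula.sh"
  }]
}
```

The shell provisioner (a hypothetical install_opennebula.sh) would follow the same quick-start steps from the Ubuntu documentation linked above.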
Re: [one-users] hardware requirements
Hello Cyrill,

When you say hardware requirements, are you referring to the OpenNebula frontend node and/or the compute nodes? The best answer I can give is "it depends" :). I have a couple of OpenNebula deployments in the wild and I will tell you what kind of hardware I use. Most probably others have different configurations.

16GB RAM, Intel Quad Core CPU, 1 TB storage - Frontend
16GB RAM, Intel Quad Core CPU, 1 TB storage - Compute Node

32GB RAM, Intel Quad Core CPU, 2 TB storage - Frontend
32GB RAM, Intel Quad Core CPU, 2 TB storage - Compute Node

24GB RAM, Intel Quad Core CPU, 2 TB storage - Frontend
24GB RAM, Intel Quad Core CPU, 2 TB storage - Compute Node

I don't make the Frontend a compute node as well, because it also has the SaltStack Master installed and I don't want the services on the Frontend node to be slowed down by the virtual workload on top of them. When you choose a hardware configuration, I think it depends on what kind of services you plan to run on top of your Cloud.

Best,
Valentin

On Sun, Oct 12, 2014 at 3:23 PM, Cyrill Häfeli i...@cyle.ch wrote:

Valentin, can you assist me with documentation on hardware requirements? Cyrill

From: Sudeep Narayan Banerjee [mailto:snbaner...@iitgn.ac.in]
Sent: Friday, October 10, 2014 5:12 PM
To: Cyrill Häfeli; Valentin Bud
Subject: Re: [one-users] hardware requirements

Dear Cyrill, I am looping in Valentin, who has helped me a lot. Dear Valentin, please help Cyrill. Regards, Sudeep

On Fri, Oct 10, 2014 at 7:18 PM, Cyrill Häfeli i...@cyle.ch wrote:

Hi Sudeep, thanks, very helpful, but I need something written, to compare vendors. Do you know where I can find these?

Best Cyrill

From: Sudeep Narayan Banerjee [mailto:snbaner...@iitgn.ac.in]
Sent: Friday, October 10, 2014 2:51 PM
To: Cyrill Häfeli
Subject: Re: [one-users] hardware requirements

If you want to make it on 2 nodes (that is, one frontend and one worker node), then you can have i7 processors, 4-8GB RAM, a 500GB HDD, and a bridged network setup, with CentOS 6.x as the OS. The above specs I have used for my pilot project.

On Fri, Oct 10, 2014 at 5:45 PM, Cyrill Häfeli i...@cyle.ch wrote:

Hi Sudeep, I know this guide, but there are no hardware requirements included. -Cyrill

From: Sudeep Narayan Banerjee [mailto:snbaner...@iitgn.ac.in]
Sent: Friday, October 10, 2014 2:07 PM
To: i...@cyle.ch
Subject: Re: [one-users] hardware requirements

PFA

On Fri, Oct 10, 2014 at 4:38 PM, i...@cyle.ch wrote:

Where can I find a list of hardware requirements to set up an OpenNebula Private Cloud, e.g. CPU, storage capacity, network..? -Cyrill

-- Thanks & Regards, Sudeep Narayan Banerjee

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Fwd: Re: Images backup
Hello Bruno,

I think Carlos was referring to the xpath.rb script from inside OpenNebula [1]. You can use that script in a bash program to achieve what you desire; you can find an example at [2]. Python is also great, and it's easy to hit the XML-RPC API to achieve pretty much everything you want :).

[1]: https://github.com/OpenNebula/one/blob/master/src/datastore_mad/remotes/xpath.rb
[2]: https://github.com/OpenNebula/one/blob/master/src/datastore_mad/remotes/fs/cp

Best,
Valentin

On Tue, Oct 14, 2014 at 10:03 AM, Bruno Grandjean bruno.grandj...@mife90.org wrote:

Hello Carlos, I have no rights to install the xpath package, so I am developing a Python script. Thank you for showing me the light. best regards bruno

On 13/10/2014 15:06, Carlos Martín Sánchez wrote:

Hi,

On Thu, Oct 9, 2014 at 11:35 AM, Bruno Grandjean bruno.grandj...@mife90.org wrote:

Hi Carlos, thanks a lot for replying to me so quickly. In fact I would like to cron the backup with a simple script:

# create a hot snapshot from the running VM
$ onevm snapshot-create 68

My initial image is already persistent, but how can I manage, and especially delete, the successive snapshots? For instance I would like to remove the snapshots older than 1 week. Is it possible to do that? thanks in advance, Bruno

Each snapshot has a timestamp, so it's easy to find them in your cron script with xpath.

Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
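Carlos's timestamp hint can be turned into a small cron-able filter. The helper below is only a sketch under assumptions: it reads "snapshot_id timestamp" pairs on stdin (as you would scrape them from onevm show -x with xpath.rb) and prints the IDs older than a cutoff epoch; feeding those IDs back into a delete command is left to the surrounding script.

```shell
# Hypothetical helper: read "snapshot_id timestamp" pairs on stdin and
# print the IDs whose timestamp is older than the cutoff epoch given as $1.
old_snapshots() {
  local cutoff=$1
  while read -r id ts; do
    [ "$ts" -lt "$cutoff" ] && echo "$id"
  done
  return 0
}

# Example: with a cutoff of 150, only snapshot 0 (taken at epoch 100) is old.
printf '0 100\n1 200\n' | old_snapshots 150   # prints: 0
```

In a real cron job the cutoff would come from something like date -d '7 days ago' +%s on a GNU system.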
Re: [one-users] Run bash script in vm on spin-up
Hello Kerry,

Do you have a registered image of type CONTEXT with the name test.sh in the files datastore? The file you specify in FILES_DS can be found on the contextualization CDROM in the VM (/dev/disk/by-label/CONTEXT). The following would run a test.sh script when the VM is spun up, at the end of the contextualization routine [1]:

CONTEXT = [
  FILES_DS = "$FILE[IMAGE=\"test.sh\"]",
  INIT_SCRIPTS = "test.sh",
  ... ]

[1]: https://github.com/OpenNebula/addon-context-linux/blob/master/base/etc/one-context.d/99-execute-scripts

Best,
Valentin

On Mon, Sep 8, 2014 at 11:09 PM, kerryhall . kerryh...@gmail.com wrote:

Thanks! I'm still having issues here, unfortunately. I tried putting:

FILES_DS = "$FILE[IMAGE=\"test.sh\"]"

into my template's context section, but I get:

User 0 does not own an image with name: test.sh

I'm not trying to include an image, I just want test.sh (a file in my file datastore) to get copied anywhere onto my VM's filesystem. (And eventually, I want test.sh to get run on VM creation, or failing that, every time the VM starts.) Thanks!!

On Fri, Jul 25, 2014 at 11:18 PM, Valentin Bud valentin@gmail.com wrote:

Hello Kerry, under Defining Context [1] there is an example of how to use FILES_DS:

FILES_DS = "$FILE[IMAGE=\"test.sh\"]"

[1]: http://docs.opennebula.org/4.6/user/virtual_machine_setup/cong.html

Best,
Valentin

On Fri, Jul 25, 2014 at 11:29 PM, kerryhall . kerryh...@gmail.com wrote:

Hi folks, I am trying to run a bash script on a VM as it gets spun up. I've read http://docs.opennebula.org/4.6/user/virtual_machine_setup/cong.html but there isn't too much to go on there. I have created test.sh and put it into the files datastore on the head node. The issue I am having is that the syntax in the Defining Context section of that page is ambiguous, specifically the FILES_DS part. I have tried:

FILES_DS = "$FILE[\"test.sh\"]"

and

FILES_DS = /var/lib/one/datastores/2/test.sh

As a first step, I'm just trying to get this file included in my VM at all. Thanks! Kerry

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] BOOTPROTO, DHCP_HOSTNAME, and vm IPs in Sunstone
Hello Kerry,

I am unsure how to use a DHCP client to suggest an IP to the DHCP server, or whether DHCP supports this at all. The DHCP client can, however, be configured to request a specific IP address from the DHCP server [1].

[1]: http://superuser.com/questions/487607/how-to-request-a-specific-ip-address-from-dhcp-server

Best,
Valentin

Thanks!! Kerry

On Mon, Sep 8, 2014 at 1:42 AM, Javier Fontan jfon...@opennebula.org wrote:

The context packages are meant to be used with static networking. To add those parameters to the network configuration you can do one of these things:

* Create the network configuration in the base images manually and do not set network contextualization in OpenNebula, so they are not overwritten.
* Modify the context packages to add those parameters [1]

For the third thing there is no way to do it. OpenNebula gets an IP from the network pool and assigns it to the NIC. There is no way to change it after it is selected, and there is no external method of selecting the IP. What you can do is configure DHCP so it picks the same IP as OpenNebula has selected for the VM. The MAC addresses are generated from the MAC prefix and the IP, so you can configure DHCP with those MAC/IP pairs:

MAC = PREFIX + IP in hex
02:00:0a:00:00:01 = 02:00 + 10.0.0.1 (in hex: 0a:00:00:01)

[1] https://github.com/OpenNebula/addon-context-linux/blob/master/base_rpm/etc/one-context.d/00-network#L103-L114

On Sat, Sep 6, 2014 at 12:23 AM, kerryhall . kerryh...@gmail.com wrote:

Hi folks, I have a small ONE cluster that I am currently setting up on 4.8. I have an ethernet network model, and I have added the following line to my template:

SET_HOSTNAME = "$NAME.mydomain.int"

So far so good, but I need to be able to do the following three things:

1. set BOOTPROTO=dhcp in /etc/sysconfig/network-scripts/ifcfg-eth0 on new VMs
2. set DHCP_HOSTNAME=$NAME.mydomain.int on new VMs
3. set the IP field in Sunstone to the VM's IP address provided by DHCP

How do I accomplish these items? I was thinking of running a bash script on VM startup for items 1 and 2, unless there is a built-in ONE way to do this, but what about item 3? Is there just a straight-up DHCP networking model I can use to make this easier? Does anyone currently use ONE with DHCP? Thanks!! Kerry

--
Javier Fontán Muiños
Developer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | @OpenNebula | github.com/jfontan

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
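Javier's MAC-from-IP rule is mechanical, so the DHCP host entries can be generated rather than typed by hand. A small sketch (the 02:00 prefix is OpenNebula's default; the helper name is made up for illustration):

```shell
# Hypothetical helper: derive the OpenNebula-generated MAC for an IPv4 address,
# following the rule MAC = <2-byte prefix> + <IP octets in hex>.
one_mac() {
  local prefix=$1 ip=$2
  # Split the IP into its four octets and print each as a two-digit hex byte.
  printf '%s:%02x:%02x:%02x:%02x\n' "$prefix" $(echo "$ip" | tr '.' ' ')
}

one_mac 02:00 10.0.0.1   # prints: 02:00:0a:00:00:01
```

Looping this over the IPs in an AR would give the mac/ip pairs to feed into a dhcpd host block.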
Re: [one-users] unknown ethernet controller on windows imported image
Hello Luca,

I think you must add MODEL = "e1000" to the NIC section of the template. You can read about it in the documentation [1].

[1]: http://docs.opennebula.org/4.8/user/references/template.html#network-section

Best,
Valentin

On Tue, Sep 9, 2014 at 3:46 PM, Luca Uburti lubu...@ricca-it.com wrote:

of course, here it is

<VM>
  <ID>32</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>123</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>0</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <LAST_POLL>1410266708</LAST_POLL>
  <STATE>3</STATE>
  <LCM_STATE>3</LCM_STATE>
  <RESCHED>0</RESCHED>
  <STIME>1410189882</STIME>
  <ETIME>0</ETIME>
  <DEPLOY_ID>one-32</DEPLOY_ID>
  <MEMORY>742400</MEMORY>
  <CPU>3</CPU>
  <NET_TX>0</NET_TX>
  <NET_RX>0</NET_RX>
  <TEMPLATE>
    <AUTOMATIC_REQUIREMENTS><![CDATA[!(PUBLIC_CLOUD = YES)]]></AUTOMATIC_REQUIREMENTS>
    <CONTEXT>
      <DISK_ID><![CDATA[1]]></DISK_ID>
      <ETH0_DNS><![CDATA[8.8.8.8]]></ETH0_DNS>
      <ETH0_GATEWAY><![CDATA[10.222.1.254]]></ETH0_GATEWAY>
      <ETH0_IP><![CDATA[10.222.1.1]]></ETH0_IP>
      <ETH0_MAC><![CDATA[02:00:0a:de:01:01]]></ETH0_MAC>
      <ETH0_MASK><![CDATA[255.255.255.0]]></ETH0_MASK>
      <ETH0_NETWORK><![CDATA[10.222.1.0]]></ETH0_NETWORK>
      <NETWORK><![CDATA[YES]]></NETWORK>
      <TARGET><![CDATA[hdb]]></TARGET>
    </CONTEXT>
    <CPU><![CDATA[0.5]]></CPU>
    <DISK>
      <CLONE><![CDATA[YES]]></CLONE>
      <CLONE_TARGET><![CDATA[SYSTEM]]></CLONE_TARGET>
      <DATASTORE><![CDATA[default]]></DATASTORE>
      <DATASTORE_ID><![CDATA[1]]></DATASTORE_ID>
      <DEV_PREFIX><![CDATA[hd]]></DEV_PREFIX>
      <DISK_ID><![CDATA[0]]></DISK_ID>
      <IMAGE><![CDATA[w2k8r2]]></IMAGE>
      <IMAGE_ID><![CDATA[13]]></IMAGE_ID>
      <IMAGE_UNAME><![CDATA[oneadmin]]></IMAGE_UNAME>
      <LN_TARGET><![CDATA[NONE]]></LN_TARGET>
      <READONLY><![CDATA[NO]]></READONLY>
      <SAVE><![CDATA[NO]]></SAVE>
      <SIZE><![CDATA[20585]]></SIZE>
      <SOURCE><![CDATA[/vmfs/volumes/1/a84b8296fb47ece716d79137f273e014]]></SOURCE>
      <TARGET><![CDATA[hda]]></TARGET>
      <TM_MAD><![CDATA[vmfs]]></TM_MAD>
      <TYPE><![CDATA[FILE]]></TYPE>
    </DISK>
    <GRAPHICS>
      <LISTEN><![CDATA[0.0.0.0]]></LISTEN>
      <PORT><![CDATA[5932]]></PORT>
      <TYPE><![CDATA[VNC]]></TYPE>
    </GRAPHICS>
    <MEMORY><![CDATA[1024]]></MEMORY>
    <NIC>
      <AR_ID><![CDATA[0]]></AR_ID>
      <BRIDGE><![CDATA[vSwitch0]]></BRIDGE>
      <IP><![CDATA[10.222.1.1]]></IP>
      <MAC><![CDATA[02:00:0a:de:01:01]]></MAC>
      <NETWORK><![CDATA[dynamic_vmware_net]]></NETWORK>
      <NETWORK_ID><![CDATA[2]]></NETWORK_ID>
      <NETWORK_UNAME><![CDATA[oneadmin]]></NETWORK_UNAME>
      <NIC_ID><![CDATA[0]]></NIC_ID>
      <VLAN><![CDATA[YES]]></VLAN>
      <VLAN_ID><![CDATA[340]]></VLAN_ID>
    </NIC>
    <TEMPLATE_ID><![CDATA[3]]></TEMPLATE_ID>
    <VCPU><![CDATA[2]]></VCPU>
    <VMID><![CDATA[32]]></VMID>
  </TEMPLATE>
  <USER_TEMPLATE/>
  <HISTORY_RECORDS>
    <HISTORY>
      <OID>32</OID>
      <SEQ>0</SEQ>
      <HOSTNAME>10.10.10.236</HOSTNAME>
      <HID>4</HID>
      <CID>-1</CID>
      <STIME>1410189893</STIME>
      <ETIME>0</ETIME>
      <VMMMAD>vmware</VMMMAD>
      <VNMMAD>vmware</VNMMAD>
      <TMMAD>vmfs</TMMAD>
      <DS_LOCATION>/vmfs/volumes</DS_LOCATION>
      <DS_ID>0</DS_ID>
      <PSTIME>1410189893</PSTIME>
      <PETIME>1410190981</PETIME>
      <RSTIME>1410190981</RSTIME>
      <RETIME>0</RETIME>
      <ESTIME>0</ESTIME>
      <EETIME>0</EETIME>
      <REASON>0</REASON>
      <ACTION>0</ACTION>
    </HISTORY>
  </HISTORY_RECORDS>
</VM>

On 09/09/2014 14:40, Tino Vazquez wrote:

Hi Luca,

I fail to see the model type in the deployment file; it should have something like the following:

<interface type='bridge'>
  <model type='e1000'/>
</interface>

Can you share the VM template (onevm show -x <vid>) as well?

Best,
-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple
--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

On 9 September 2014 12:52, Luca Uburti lubu...@ricca-it.com wrote:

Hello, I have attached those two files. thank you

On 09/09/2014 12:22, Tino Vazquez wrote:

Hi Luca,

Ok, let's see if we can get to the bottom of this issue. Could you provide the deployment file of the VM? It is placed on the front-end, under /var/lib/one/datastores/<ds_id>/deployment.0, where
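Applied to Luca's template, the suggestion would make the NIC section look something like the fragment below (only MODEL is new; the network name is taken from the VM dump above, and the lowercase 'e1000' matches the libvirt model type Tino quotes):

```text
NIC = [
  NETWORK = "dynamic_vmware_net",
  MODEL   = "e1000" ]
```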
Re: [one-users] 2 problems in VMs
Hello Sudeep,

I don't know what version of OpenNebula you are using. I have tested on my setup, which uses 4.4; the following might not apply in your case. See inline.

[oneadmin@front ~]$ oneimage chmod 33 664
[oneadmin@front ~]$ oneimage show 33
IMAGE 33 INFORMATION
ID             : 33
NAME           : win7_sys
USER           : oneadmin
GROUP          : oneadmin
DATASTORE      : default
TYPE           : OS
REGISTER TIME  : 07/30 14:49:49
PERSISTENT     : No
SOURCE         : /var/lib/one//datastores/1/0f72c7f0cd7fdc78f53b4fdbd2edb008
PATH           : /home/wind.qcow2
SIZE           : 9G
STATE          : rdy
RUNNING_VMS    : 0

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : ---

IMAGE TEMPLATE
DEV_PREFIX="hd"
DRIVER="qcow2"

According to the above output it seems like oneimage chmod didn't change the permissions. I think they should resemble the following:

PERMISSIONS
OWNER          : um-
GROUP          : um-
OTHER          : u--

Can you try chmod-ing again; if it still doesn't work, maybe it's a bug.

Best,
Valentin

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
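The mapping between the octal mode and the u/m/a strings above is just three bits per digit (use = 4, manage = 2, admin = 1), so the expected display for 664 can be checked mechanically. A small sketch of that decoding (the function name is made up for illustration):

```shell
# Hypothetical decoder: render one octal digit of an OpenNebula permission
# mode as the use/manage/admin string shown by `oneimage show`.
perm_digit() {
  local d=$1 out=""
  [ $(( d & 4 )) -ne 0 ] && out="u" || out="-"
  [ $(( d & 2 )) -ne 0 ] && out="${out}m" || out="${out}-"
  [ $(( d & 1 )) -ne 0 ] && out="${out}a" || out="${out}-"
  echo "$out"
}

# 664 -> OWNER um-, GROUP um-, OTHER u--
perm_digit 6   # prints: um-
perm_digit 4   # prints: u--
```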
Re: [one-users] Run bash script in vm on spin-up
Hello Kerry,

Under Defining Context [1] there is an example of how to use FILES_DS:

FILES_DS = "$FILE[IMAGE=\"test.sh\"]"

[1]: http://docs.opennebula.org/4.6/user/virtual_machine_setup/cong.html

Best,
Valentin

On Fri, Jul 25, 2014 at 11:29 PM, kerryhall . kerryh...@gmail.com wrote:

Hi folks, I am trying to run a bash script on a VM as it gets spun up. I've read http://docs.opennebula.org/4.6/user/virtual_machine_setup/cong.html but there isn't too much to go on there. I have created test.sh and put it into the files datastore on the head node. The issue I am having is that the syntax in the Defining Context section of that page is ambiguous, specifically the FILES_DS part. I have tried:

FILES_DS = "$FILE[\"test.sh\"]"

and

FILES_DS = /var/lib/one/datastores/2/test.sh

As a first step, I'm just trying to get this file included in my VM at all. Thanks! Kerry

___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] How use $UNAME
Hi Alexandr, It is defined but not automatically included. You have to configure it in the template: CONTEXT=[ UNAME=$UNAME ] You can, of course, name the variable inside CONTEXT as you wish. I am using ONE 4.4 and the above applies to that version; I don't know whether 4.6 includes the pre-defined variables by default. Best, Valentin On Thu, Jul 24, 2014 at 1:34 PM, Alexandr Baranov telecastcl...@gmail.com wrote: Hi, I'm trying to set up Kerberos login to log into VMs with the Kerberos username matching the ONE username. I'm going to use the UNAME variable, and the ONE documentation states it is pre-defined. So the question is: do I need to manually specify UNAME inside the VM template, or is it automatically defined and passed to the contextualization scripts? ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
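Inside the guest, CONTEXT variables end up in the context.sh file on the context CDROM, so any script can just source it. A sketch, simulating the file locally so it runs anywhere (on a real VM you would source it from the mounted context CD instead):

```shell
# Simulate the context.sh that OpenNebula generates from
# CONTEXT=[ UNAME=$UNAME ]. The value "oneadmin" is an example; on a real
# guest this file comes from the mounted context CDROM.
printf 'UNAME="oneadmin"\n' > context.sh
. ./context.sh
echo "ONE user: $UNAME"
```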
Re: [one-users] [VirtualMachineAllocate] Error allocating a new virtual machine. Only one CONTEXT attribute can be defined.
Hi Alexandr, Can you post your template, please? Have you somehow defined two CONTEXT sections inside the template? Bad Request : [VirtualMachineAllocate] Error allocating a new virtual machine. Only one CONTEXT attribute can be defined. The above makes me think you did. Best, Valentin On Thu, Jul 24, 2014 at 4:37 PM, Alexandr Baranov telecastcl...@gmail.com wrote: Hi, I have problems creating new virtual machines with rOCCI, although I can create new virtual machines through the Sunstone web interface. Here's the output: $ occi --endpoint rOCCI endpoint --action create --resource compute --mixin os_tpl#uuid_kvm_scientific_6_x86_64_1cpu_1gb_ram_clstkvm_155 --mixin resource_tpl#small --attribute occi.core.title=uuid_sl65_cvmfs_cloudinit_276_cloudinit_1406193329 --output-format json --auth basic --username USER --password passwrd F, [2014-07-24T17:25:32.944541 #28565] FATAL -- : [rOCCI-cli] An error occurred! Message: HTTP POST with ID[72735417-9542-4e74-b04d-7d0c09b6c525] failed! HTTP Response status: [400] Bad Request : [VirtualMachineAllocate] Error allocating a new virtual machine. Only one CONTEXT attribute can be defined. What could be the problem? ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] [VirtualMachineAllocate] Error allocating a new virtual machine. Only one CONTEXT attribute can be defined.
Looks good to me, Alexandr. It seems the problem lies somewhere else. I have never used rOCCI so I am unable to help you further :|. Maybe others will. Best, Valentin On Thu, Jul 24, 2014 at 4:55 PM, Alexandr Baranov telecastcl...@gmail.com wrote: My template: NIC=[NETWORK=36-kvm,NETWORK_UNAME=oneadmin] MEMORY=1024 CPU=1 DISK=[IMAGE=SL65-cvmfs-cloudinit,IMAGE_UNAME=$IMAGE_UNAME] GRAPHICS=[TYPE=VNC,LISTEN=0.0.0.0] CONTEXT=[SSH_PUBLIC_KEY=ssh-rsa [KEY],NETWORK=YES] RAW=[TYPE=kvm] OS=[ARCH=x86_64,BOOT=hd] On 24.07.2014 17:46, Valentin Bud valentin@gmail.com wrote: Hi Alexandr, Can you post your template, please? Have you somehow defined two CONTEXT sections inside the template? Bad Request : [VirtualMachineAllocate] Error allocating a new virtual machine. Only one CONTEXT attribute can be defined. The above makes me think you did. Best, Valentin On Thu, Jul 24, 2014 at 4:37 PM, Alexandr Baranov telecastcl...@gmail.com wrote: Hi, I have problems creating new virtual machines with rOCCI, although I can create new virtual machines through the Sunstone web interface. Here's the output: $ occi --endpoint rOCCI endpoint --action create --resource compute --mixin os_tpl#uuid_kvm_scientific_6_x86_64_1cpu_1gb_ram_clstkvm_155 --mixin resource_tpl#small --attribute occi.core.title=uuid_sl65_cvmfs_cloudinit_276_cloudinit_1406193329 --output-format json --auth basic --username USER --password passwrd F, [2014-07-24T17:25:32.944541 #28565] FATAL -- : [rOCCI-cli] An error occurred! Message: HTTP POST with ID[72735417-9542-4e74-b04d-7d0c09b6c525] failed! HTTP Response status: [400] Bad Request : [VirtualMachineAllocate] Error allocating a new virtual machine. Only one CONTEXT attribute can be defined. What could be the problem? ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] DHCP
Hello Kerry, When you say "DNS and DHCP are tied into one system on my network", do you mean your DNS server is configured with a dynamic zone that is updated from the DHCP server? If so, I guess you want a VM to acquire a lease from the DHCP server equal to the IP given by OpenNebula to that specific VM. This way it maintains consistency. If you populate CONTEXT with NETWORK=YES, the contextualization packages will do the job of configuring the network. As far as I know it's not possible to tell the contextualization packages to leave the network unconfigured, so that approach would not get you where you want. One way of achieving what you want is to write a hook that inserts a host into the DHCP server via OMAPI [1]. From the hook you have access to the entire template. I am assuming you are using ISC DHCP. This would solve the consistency issue. Now the VM needs to be configured for DHCP, as you mentioned in your previous e-mail. For that you can define SET_HOSTNAME in the CONTEXT to have it available in the VM, create a simple (bash) script that reads it and sets ifcfg-eth0 accordingly, put the script in the files datastore, and update the context to use it via FILES_DS and INIT_SCRIPTS. For example, say you name your script set_dhcp.sh. You can insert it in the files datastore [2] using `oneimage create -d files --name set_dhcp.sh --type CONTEXT /path/to/set_dhcp.sh`. Then update the template to use it: CONTEXT=[ FILES_DS="$FILE[IMAGE=\"set_dhcp.sh\"]", INIT_SCRIPTS="set_dhcp.sh" ] This way the vmcontext init.d script from the contextualization packages would skip the network configuration (note that NETWORK=YES is missing) and would run the set_dhcp.sh script. One more thing that comes to mind: SET_HOSTNAME would instruct the vmcontext init script to set the hostname, and maybe that's not what you want; maybe you want your VM to set its hostname from DHCP.
In this case use another variable, like DHCP_HOSTNAME, in CONTEXT and process it in set_dhcp.sh to configure ifcfg-eth0. You can set DHCP_HOSTNAME=$NAME inside the template. This way the name you give the VM in Sunstone would end up being used. Take the above with a grain of salt; I am sure there are other (better) ways of achieving what you desire. @Diego could you please point me to where you found DHCP in the VNET template? I have looked over the docs and couldn't find it. [1]: http://ipamworldwide.com/dhcp-api.html [2]: http://docs.opennebula.org/4.6/administration/storage/file_ds.html Best, Valentin On Wed, Jul 23, 2014 at 10:45 PM, kerryhall . kerryh...@gmail.com wrote: I have managed to figure out how to edit my template (onetemplate update TEMPLATE_ID) so I can now set the hostname. However, DNS and DHCP are tied into one system on my network, so I can't perform a dig on my new server and get the IP back. I just need to know what lines to add to my template file to get my VMs to use DHCP only. I'm sure it's something really simple, I just can't find any info here: http://docs.opennebula.org/4.6/user/virtual_machine_setup/cong.html If you know how to get DHCP working, please let me know. Thank you!! On Tue, Jul 22, 2014 at 3:48 PM, kerryhall . kerryh...@gmail.com wrote: Hi folks, I want to configure OpenNebula to contextualize two items for ifcfg-eth0: BOOTPROTO=dhcp and DHCP_HOSTNAME=a parameter passed in via the ONE web interface; even just the VM name would work. I need no other networking settings; DHCP on my network handles the rest. I have tried manually setting these two parameters in ifcfg-eth0 and it looks like it works. (Although the new IP given via DHCP doesn't match the original IP.) Is this possible to do in ONE? If so, where do I start? Thanks!
Kerry ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
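A minimal sketch of the set_dhcp.sh init script described in this thread. The RHEL/CentOS ifcfg path, the IFCFG override and the localhost fallback are assumptions for illustration; the override lets the sketch run against a scratch file instead of the real system path:

```shell
#!/bin/sh
# set_dhcp.sh: rewrite ifcfg-eth0 for DHCP using the DHCP_HOSTNAME context
# variable. On a real guest IFCFG would be
# /etc/sysconfig/network-scripts/ifcfg-eth0; here it defaults to a local
# scratch file so the sketch is runnable anywhere.
IFCFG=${IFCFG:-./ifcfg-eth0}
cat > "$IFCFG" <<EOF
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
DHCP_HOSTNAME=${DHCP_HOSTNAME:-localhost}
EOF
cat "$IFCFG"
```

Run from INIT_SCRIPTS after the context variables are sourced, DHCP_HOSTNAME would carry the value set in the template (e.g. DHCP_HOSTNAME=$NAME).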
Re: [one-users] VNC proxy is working but Sunstone VNC window shows an error
Hello Paweł, Maybe your issue is somehow connected to the Same-Origin Policy discussed in the following thread: http://lists.opennebula.org/pipermail/users-opennebula.org/2014-February/026405.html Best, Valentin On Wed, Jul 23, 2014 at 4:16 PM, Daniel Molina dmol...@opennebula.org wrote: On 22 July 2014 16:44, pawel.orzechow...@budikom.net wrote: I had a similar situation. When I switched on the secure websockets connection in my profile settings I could not open a VNC connection to any virtual machine. Isn't it a bug? What is the error message that is shown in Sunstone? Cheers Pawel I have found my mistake! It was silly - some time ago I pressed the Secure websockets connection checkbox in the configuration tab and forgot about it. So now it's working! THANK YOU. 23.08.2012, 17:17, Пярн Артур dekkart at yandex.ru: * If I go directly from the browser to http://10.18.1.7:35792/ it shows me in the websockify output the following message: Normal web request received but disallowed ** 23.08.2012, 16:54, Пярн Артур dekkart at yandex.ru: ** Just in case, my configuration: ** 10.2.2.2 - KVM host with VMs ** 10.18.1.7 - only OpenNebula and Sunstone front-end ** * Can you connect directly without noVNC to those VMs (10.2.2.2:5916 for example) with a standard VNC client? ** Yes, with no problem ** If I start the proxy manually (/srv/cloud/one/share/noVNC/utils/websockify 35792 10.2.2.2:5916), I can't connect from a VNC client to 10.18.1.7:35792; it shows me ignoring socket not ready: ** root at ubuntu:/srv/cloud/one/share/noVNC/utils# /srv/cloud/one/share/noVNC/utils/websockify 35792 10.2.2.2:5916 ** WARNING: no 'numpy' module, HyBi protocol is slower or disabled ** WebSocket server settings: **- Listen on :35792
**- Flash security policy server **- No SSL/TLS support (no cert file) **- proxying from :35792 to 10.2.2.2:5916 **1: 10.2.0.3: ignoring socket not ready **2: 10.2.0.3: ignoring socket not ready ** I found that I can connect to 10.18.1.7:35792 from the browser (http://10.18.1.7:6080/vnc.html) when I start both the launch.sh script and also websockify 35792 10.2.2.2:5916 ** 23.08.2012, 16:20, Hector Sanjuan hsanjuan at opennebula.org: ** Your working screenshot shows a VNC session to a VM deployed on the same ** host as the Sunstone frontend (10.18.1.7:6080), but from Sunstone you are ** trying to open connections to VMs on a different host (10.2.2.2). My ** questions then: ** * Can you connect directly without noVNC to those VMs (10.2.2.2:5916 for ** example) with a standard VNC client? ** * If you manually launch the proxy with the webserver option like you did, ** would something like ** http://10.18.1.7:6080/vnc.html?host=10.2.2.2&port=5916 work? This should ** not work if Sunstone doesn't. Perhaps you can get some useful info from the ** websockify stdout when attempting it, if it doesn't work. ** (in response to your latest email, don't worry about the timeout thing) ** Hector ** En Thu, 23 Aug 2012 14:08:05 +0200, Пярн Артур dekkart at yandex.ru ** escribió: **Sure, logs below. sunstone.error is empty. It's our DEMO network, no **firewall, so no way.
**-- ** Server configuration **-- **{:vnc_proxy_path=/srv/cloud/one/share/noVNC/utils/websockify, ** :auth=sunstone, ** :vnc_proxy_key=nil, ** :vnc_proxy_support_wss=false, ** :debug_level=3, ** :vnc_proxy_base_port=29876, ** :host=0.0.0.0, ** :port=, ** :one_xmlrpc=http://localhost:2633/RPC2 http://localhost:2633/RPC2, ** :core_auth=cipher, ** :lang=en_US, ** :vnc_proxy_cert=nil} **== Sinatra/1.3.2 has taken the stage on for development with backup **from Thin **Thu Aug 23 12:04:43 2012 [I]: 10.2.0.3 - - [23/Aug/2012 12:04:43] GET / **HTTP/1.1 200 1595 0.0075 **Thu Aug 23 12:04:43 2012 [I]: 10.2.0.3 - - [23/Aug/2012 12:04:43] GET **/css/login.css HTTP/1.1 304 - 0.0022 **Thu Aug 23 12:04:43 2012 [I]: 10.2.0.3 - - [23/Aug/2012 12:04:43] GET **/vendor/jQueryUI/jquery-ui-1.8.16.custom.css HTTP/1.1 304 - 0.0023 **Thu Aug 23 12:04:43 2012 [I]: 10.2.0.3 - - [23/Aug/2012
Re: [one-users] OVH (So You Start) + OpenNebula + 1 IP
Hello Pablo, On Fri, Jul 18, 2014 at 6:53 AM, Pablo Hinojosa Nava pablo...@gmail.com wrote: Hi all, I am having problems trying to set up an OpenNebula host to virtualize with KVM. I have bought a dedicated server from OVH (So You Start) with 1 public IP. The public IP is on the eth0 interface, and each time I try to create a Linux bridge (as is suggested during the installation http://docs.opennebula.org/4.6/design_and_installation/quick_starts/qs_ubuntu_kvm.html#configure-the-network ) my SSH connection freezes. If I edit the file /etc/network/interfaces and then run /etc/init.d/networking restart, the server does not seem to do anything. If I restart the server with the changes made, I cannot recover the connection and I have to reinstall the operating system. If I try to create the bridge by hand, I hit the same problem (connection lost) when I try to add the public network interface (eth0) to the bridge. It seems OVH uses MAC-filtering switches, so that could be the reason. I could buy a block of 8 or 16 IPs, but it does not change the way to configure OpenNebula. If OVH is using MAC filtering then you cannot bridge eth0 unless you create the bridge using the same MAC address eth0 has. I have never tried it, so I just suppose it could work. You have at least three choices: you can NAT a private range over eth0 and define that network in OpenNebula; you can acquire a /29 block, create a bridge, set the first public IP address on it and define the network in OpenNebula; or you can use a combination of both. You can choose either a Linux bridge or Open vSwitch. Both are really easy to set up and get running, and OpenNebula has network drivers for both. I would choose Open vSwitch :). Best, Valentin ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
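The NAT option above can be sketched as follows. The interface names and the 192.168.100.0/24 range are assumptions; the commands need root on the actual host, so here they are written to a file and only syntax-checked:

```shell
# Sketch: private bridge for the VMs, masqueraded out through eth0.
# Written to a file for review; apply as root on the real host.
cat > nat-setup.sh <<'EOF'
ip link add name br0 type bridge
ip addr add 192.168.100.1/24 dev br0
ip link set br0 up
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
EOF
sh -n nat-setup.sh && echo "syntax OK"
```

The 192.168.100.0/24 range would then be defined as a virtual network in OpenNebula with BRIDGE=br0.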
Re: [one-users] /var/lib/one/remotes/ or /var/tmp/one
Hello Thomas, The probes that live under /var/lib/one/remotes are copied by OpenNebula to the compute node when you create the host. They are also synced whenever you want with `onehost sync`. By default OpenNebula stores the copy on the compute nodes in /var/tmp/one. This is documented in the Managing Hosts [1] documentation. I hope this helps. [1]: http://docs.opennebula.org/4.6/administration/hosts_and_clusters/host_guide.html#onehost-command Best, Valentin On Fri, Jul 11, 2014 at 11:29 AM, Thomas Stein himbe...@meine-oma.de wrote: Hello. Just wondering, what is the correct location for the run_probes commands? They seem to be installed to /var/lib/one/remotes/ but are searched for by oned in /var/tmp/one. best regards t. ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] VM Failed with Windows OS
Hello Sudeep, I have never used virt-install but I doubt it installs the VirtIO drivers [1] by itself inside the Windows 7 image. Without those drivers virtio doesn't work inside the machine, in the sense that the Windows installer doesn't see any HDD present. Can you boot the VM on your local machine using qemu? [1]: http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers Best, Valentin On Wed, Jul 9, 2014 at 10:15 PM, Sudeep Narayan Banerjee snbaner...@iitgn.ac.in wrote: Hello Valentin, I followed the steps and could install Win7 using the virt-install command. The Win7 OS gets installed on a 7GB HDD and I could see the Win7 desktop using virt-manager. Then I power off the machine and create the image, template and VM successfully from the oneadmin prompt. But from vncviewer in the OpenNebula GUI it says Booting from Hard-Disk, and inside the VM I am not able to see the desktop! - Following steps I did: 1. [oneadmin@front ~]$ cd /var/lib/libvirt/images/ [oneadmin@front images]$ qemu-img create -f qcow2 -o preallocation=metadata /storage/local/images/win7.qcow2 8G Formatting '/var/lib/libvirt/images/win7.qow2', fmt=qcow2 size=8589934592 encryption=off cluster_size=65536 preallocation='metadata' [oneadmin@front images]$ qemu-img info win7.qcow2 image: win7.qcow2 file format: raw virtual size: 8.0G (8589934592 bytes) disk size: 392K cluster_size: 65536 [oneadmin@front images]$ qemu-img convert -f raw -O qcow2 win7.qcow2 win7.qcow2 [oneadmin@front images]$ qemu-img info win7.qcow2 image: win7.qcow2 file format: qcow2 virtual size: 8.0G (8589934592 bytes) disk size: 392K cluster_size: 65536 [oneadmin@front images]$ chown qemu.qemu win7.qcow2 [oneadmin@front images]$ virt-install --connect qemu:///system --name win7 --ram 1024 --vcpus 2 --disk path=/var/lib/libvirt/images/win7.qcow2,size=8,format=qcow2,bus=virtio,cache=none --cdrom /home/Windows-7-x86.iso --vnc --os-type=windows --os-variant=win7 -noautoconsole --accelerate --noapic --keymap=en-us Finally, [oneadmin@front ~]$
onevm list ID USER GROUPNAMESTAT UCPUUMEM HOST TIME 25 oneadmin oneadmin myvm1 runn0512M nc1 11d 16h14 Regards, Sudeep On Tue, Jul 8, 2014 at 9:13 PM, Valentin Bud valentin@gmail.com wrote: Hello Sudeep, Note that virt-install doesn't care that your image is already created, it overwrites it and changes the format to raw. Take a look over this article [1], on how to pass the format of the image to virt-install. Or just qemu-img convert your raw image to qcow2. [1]: http://opennodecloud.com/documentation/howtos/kvm-guests-virt-install-examples/ Best, Valentin On Tue, Jul 8, 2014 at 6:24 PM, Sudeep Narayan Banerjee snbaner...@iitgn.ac.in wrote: Dear Javier, You are true. It is raw indeed. But I made it like qcow2! Below is the command I executed for qcow2. [oneadmin@front images]$ qemu-img create -f qcow2 -o preallocation=metadata /storage/local/images/ winserv1.qcow2 8G Formatting '/var/lib/libvirt/images/win7.qow2', fmt=qcow2 size=8589934592 encryption=off cluster_size=65536 preallocation='metadata' [oneadmin@front images]$ qemu-img info /var/lib/libvirt/images/win7.qcow2 image: /var/lib/libvirt/images/win7.qcow2 file format: raw virtual size: 8.0G (8589934592 bytes) disk size: 4.9G So, how to fix it?!? I have been trying to make a VM in windows for past 10days but still not able to make it! :-( Regards, Sudeep On Tue, Jul 8, 2014 at 8:45 PM, Javier Fontan jfon...@opennebula.org wrote: I believe that the image is not a qcow2 image but raw. You can check the format with: $ qemu-img info /var/lib/libvirt/images/win7.qcow2 On Tue, Jul 8, 2014 at 4:16 PM, Sudeep Narayan Banerjee snbaner...@iitgn.ac.in wrote: Dear Sir, I have been trying to install Windows-7 64bit in OpenNebula. Our target is to create a VM with Windows as OS. Below are the steps that I followed. But not able to runn the VM. SeLinux is Disabled in both servers. 
[root@front ~]# getenforce-- Frontend Disabled [root@nc1 ~]# getenforce -- Worker Node Disabled [oneadmin@front ~]$ cd /var/lib/libvirt/images/ [oneadmin@front images]$ qemu-img create -f qcow2 -o preallocation=metadata /storage/local/images/winserv1.qcow2 8G Formatting '/var/lib/libvirt/images/win7.qow2', fmt=qcow2 size=8589934592 encryption=off cluster_size=65536 preallocation='metadata' [oneadmin@front images]$ ls -lrth total 5.0G -rw--- 1 oneadmin oneadmin 8.0G Jul 8 18:23 win7.qcow2 [oneadmin@front images]$ exit [root@front ~]# virt-install --prompt What is the name of your virtual machine? myvm7 How much RAM should be allocated (in megabytes)? 1024 What would you like to use as the disk (file path)? /var/lib/libvirt/images/win7.qcow2 How large would you like
Re: [one-users] VM Failed with Windows OS
-kvm: -drive file=/var/lib/one/datastores/0/36/disk.0,if=none,id=drive-ide0-0-0,format=qcow2: could not open disk image /var/lib/one/datastores/0/36/disk.0: Invalid argument Tue Jul 8 18:35:32 2014 [VMM][I]: Tue Jul 8 18:35:32 2014 [VMM][E]: Could not create domain from /var/lib/one/datastores/0/36/deployment.0 Tue Jul 8 18:35:32 2014 [VMM][I]: ExitCode: 255 Tue Jul 8 18:35:32 2014 [VMM][I]: Failed to execute virtualization driver operation: deploy. Tue Jul 8 18:35:32 2014 [VMM][E]: Error deploying virtual machine: Could not create domain from /var/lib/one/datastores/0/36/deployment.0 Tue Jul 8 18:35:33 2014 [DiM][I]: New VM state is FAILED -- Thanks Regards, Sudeep Narayan Banerjee ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Javier Fontán Muiños Developer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | @OpenNebula | github.com/jfontan -- Thanks Regards, Sudeep Narayan Banerjee ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
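A note on the format mixup running through this thread: `qemu-img create -f qcow2` on one path followed by `qemu-img info` reporting raw usually means the info command was run on a different file, or virt-install re-created the disk as raw because no format was given. A sketch of checking the reported format, parsing captured sample output so it runs without qemu installed:

```shell
# Sample `qemu-img info` output captured as a string for illustration.
info='image: win7.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)'

# Extract the reported format; on a real host you would pipe
# `qemu-img info win7.qcow2` into the same awk.
fmt=$(printf '%s\n' "$info" | awk -F': ' '/^file format/ {print $2}')
echo "detected format: $fmt"

# Pass the format to virt-install explicitly so the image is not
# overwritten as raw (flag syntax as used earlier in the thread):
#   virt-install ... --disk path=win7.qcow2,format=qcow2,bus=virtio,cache=none
```

Also note that `qemu-img convert -f raw -O qcow2 file file` with the same source and destination path is unsafe; convert to a new file and move it into place.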
Re: [one-users] Need https for Opennebula URL in browser
configuration snippet in httpd/conf.d:

<VirtualHost *:443>
  ServerName default-ssl
  ## Vhost docroot
  DocumentRoot /usr/lib/one/sunstone/public
  ## Directories, there should at least be a declaration for /usr/lib/one/sunstone/public
  <Directory /usr/lib/one/sunstone/public>
    Options -MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>
  ## Logging
  ErrorLog /var/log/httpd/default-ssl_error_ssl.log
  LogLevel warn
  ServerSignature Off
  CustomLog /var/log/httpd/default-ssl_access_ssl.log combined
  ## SSL directives
  SSLEngine on
  SSLCertificateFile <crt file>
  SSLCertificateKeyFile <key file>
  SSLCACertificatePath /etc/ssl/certs
  SSLCACertificateFile <bundle file>
  <FilesMatch "\.(cgi|shtml|phtml|php)$">
    SSLOptions +StdEnvVars
  </FilesMatch>
</VirtualHost>

hth, Martin On 26 Jun 2014, at 14:51, Sudeep Narayan Banerjee snbaner...@iitgn.ac.in wrote: Dear Sirs, Is there any update on the same? Thank you in advance! S N Banerjee On Thu, Jun 26, 2014 at 1:46 AM, Sudeep Narayan Banerjee snbaner...@iitgn.ac.in wrote: Dear Sir, Firstly I would like to thank you for the simple solution provided in the thread [one-users] VM in opennebula failing. Now I would like to route it through SSL on port 443. I checked your site and could find the steps meant for Ubuntu (hope I checked properly!). Is it possible for CentOS 6.5 x86_64? Thanks in advance!
Sudeep -- Thanks Regards, Sudeep Narayan Banerjee -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] greetings
Hello Galimba, I would like to kindly welcome you to the magic world of Cloud Computing :). I think your decision to use OpenNebula for your needs was a wise one. A road filled with fun, amusement and sometimes frustration lies ahead. Enjoy. When I first read your e-mail I thought of exactly the same solution as the one you pointed out: connect to the firewall and modify the iptables rules. I would choose to modify them via a hook [1] because I don't like tampering with the deploy script. Why is that? In case of a future update you don't have to worry that your deploy script gets overwritten. Another safe option would be to copy the whole virtualization manager, name it kvm-local, modify the deploy script there and update the hosts to use that driver. Another solution that came to mind is to define a pseudo-public network in ONE using a desired private range, then map the last octet of your public range to the one in this private range. Easier to remember, though your users might not agree. I think it's easier if I write an example. Public: X.Y.Z.*100* Private: 172.16.0.*100* On the firewall you would have to DNAT each of those 100 public IP addresses to each of the private ones. You would have to do this once; for speed you can generate the rules with a basic for loop. The next step would be to hold [2] all the IPs from the private (pseudo-public) network that you don't have available in the public range. Not elegant, not user friendly, but a (working) solution nonetheless. The most elegant solution I am aware of would be to create a VLAN subinterface for that /25 range on the firewall and configure a true public network inside ONE. It could even be done with bridging only, without the hassle of setting up VLANs. But you need to be able to partition your network in this manner, and it might not work for you. Your challenge is a really interesting one and I would like to hear other people's opinions and possible solutions.
It gave me food for thought and I am grateful for that. [1]: http://docs.opennebula.org/4.4/integration/infrastructure_integration/hooks.html [2]: http://docs.opennebula.org/4.4/user/virtual_resource_management/vgg.html Best, Valentin On Fri, Jun 20, 2014 at 12:27 AM, Galimba gali...@gmail.com wrote: Hello everyone. My name is Sebastian. I'm new to this list and though I've been a sysadmin for several years now, I've only recently dived into Cloud Computing. I have successfully installed OpenNebula 4.4 on a local computer behind a firewall at my university. I set up two nodes and another dedicated computer as an NFS datastore. The plan is to provide my research group with the IaaS that OpenNebula brings to the table. At the moment I'm dealing with an issue I haven't been able to solve, and perhaps some of you could throw me a hint. My university assigned me over 100 public IP addresses to provide to each VM. If I were to plug the cable directly into the OpenNebula box, then I know I could create my templates with public IP addresses and everything should be fine. The problem is that I have a firewall in the middle, managing all the public IPs, and my OpenNebula box is on a LAN behind that firewall. Is there an easy (and safe) way to assign public IPs and pass through the iptables on the firewall? I mean... the only solution I came up with was to modify the deploy script on the OpenNebula box to connect to the firewall and modify the iptables rules regarding the particular VM I'm trying to deploy. That's not a very happy solution. Thanks in advance. galimba -- ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
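The one-octet DNAT mapping described above can be generated with a basic for loop. A sketch, with 203.0.113 (a documentation prefix) standing in for the real X.Y.Z and the rules written to a file for review rather than applied:

```shell
# Generate one DNAT rule per address, mapping public X.Y.Z.N to private
# 172.16.0.N. The 100..110 range is an example subset of the ~100 leases.
PUB=203.0.113   # placeholder for the real public prefix X.Y.Z
for n in $(seq 100 110); do
  echo "iptables -t nat -A PREROUTING -d $PUB.$n -j DNAT --to-destination 172.16.0.$n"
done > dnat-rules.sh
cat dnat-rules.sh
```

A matching SNAT (or MASQUERADE) rule in POSTROUTING would be needed for the return direction if the VMs should also originate traffic from their public addresses.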
Re: [one-users] OpenNebula VM automatic backups.
Hi Leszek, You can also schedule per-VM actions [1]; see Scheduling Options. snapshot-create is available but clone isn't. [1]: http://docs.opennebula.org/4.4/user/virtual_resource_management/vm_guide_2.html Best, Valentin ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] virtual network cannot get out
public Port em1 Interface em1 Port vnet0 Interface vnet0 Port public Interface public type: internal Bridge storage Port storage Interface storage type: internal Port vlan20 tag: 20 Interface vlan20 type: internal ovs_version: 2.1.0 From the opennebula server I can see this. onevnet list ID USER GROUPNAMECLUSTER TYPE BRIDGE LEASES 0 oneadmin oneadmin management ifx-produc R manageme 0 1 oneadmin oneadmin storage ifx-produc R storage 0 6 oneadmin oneadmin public ifx-produc R public 1 I've followed the instruction for configuring the hosting server so that oneadmin has rights to access /var/lib/one on the hosting server as well as sudo access to the scripts needed to create networks. I have all the changes recommended to allow oneadmin to execute commands through ssh to cloud1 the hosting server. oneadmin ALL=(ALL) NOPASSWD: /usr/sbin/tgtadm, /sbin/lvcreate, /sbin/lvremove, /bin/dd, /usr/bin/ovs-vsctl, /usr/bin/ovs-ofctl, /usr/bin/ovs-dpctl, /sbin/iptables, /sbin/ebtables I can instantiate hosts from templates and everything works as expected. When I bring up a virtual host, it gets an IP from the dhcp server running in the network. Not from the virtual network. Sorry, I can't cut and paste that part, since the only way I can access the virtual machine is through either VNC in sunstone or with virt-manager. I have another server running ovswitch that works fine. The main difference is that I used virt-manager to create the hosts, instead of opennebula. Those five virtual servers connect fine. [root@cloud2 ~]# ovs-vsctl show aa56747f-d5a2-41b0-a998-48add3c62562 Bridge public Port vnet4 Interface vnet4 Port vnet0 Interface vnet0 Port vnet3 Interface vnet3 Port public Interface public type: internal Port em1 Interface em1 Port vnet1 Interface vnet1 Port vnet2 Interface vnet2 ovs_version: 2.1.0 On cloud1 after the host gets it's IP address from the dhcp server running in our network, it can no longer connect to anything. 
I've checked the iptables rules, and flushed them for testing, just to make sure. Everything seems right, but the network isn't working. Sure would like to buy a clue. I've been searching the web for an answer or an idea of what to do to diagnose it. I suspect what's happening is that OpenNebula/Sunstone is not creating the interface properly. As I understand it, the IP should be assigned to the bridge, not the virtual interface. Sure could use some help. Even a pointer to a web site with the right answer would be appreciated. I haven't been able to find it myself. Sorry for cross-posting, but I couldn't decide which list to post to, so I did both. -- Neil Schneider pacneil_at_linuxgeek_dot_net "This is your life. Do what you love, and do it often. If you don't like something, change it. If you don't like your job, quit. If you don't have enough time, stop watching TV. If you are looking for the love of your life, stop; they will be waiting for you when you start doing things you love." ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] VM cannot connect to outside (internet)
Hi Sangram, On Fri, May 9, 2014 at 9:39 PM, Sangram Rath sangram.r...@gmail.com wrote: Hi, The virtual machine gets an IP through contextualization, however the virtual machine is not able to connect to the internet. Also, from inside the VM I do not see any other interface apart from lo. Is this normal in contextualization? I wouldn't call it normal contextualization, because the VM is missing the primary Ethernet interface, eth0. Let's try to figure out why. I am able to ping the VM from the same host. Host is CentOS 6.1. Host has br0 connected to interface eth0. And virbr0. Have you defined a virtual network in OpenNebula? Does that network use br0 or virbr0? Where would you want your VMs to connect, br0 or virbr0? If you want to isolate the VMs in a private network defined on virbr0, you have to enable IP forwarding on the host and either NAT or route the virbr0 network to the outside world. It would help in troubleshooting if you can post the output of onevnet list and onevnet show with the name or ID of your virtual network. Does the template that you instantiate the VM from have a NETWORK section? Can you share the output of onetemplate show with the name or ID of the template? One more thing that can help is the output of onevm show with the name or ID of the VM. What OS are you running inside the VM? It's strange that the VM doesn't have an eth0 interface. You can also check the boot logs and search for Ethernet adapters. I also think that lspci output would help you. Where did you get the VM image from? Have you built it yourself? Maybe the udev rules are still present and the interface doesn't show up because of that. Best, Valentin ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
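On the udev point above: images cloned from another machine often carry a 70-persistent-net.rules entry that pins eth0 to the old MAC, so the new NIC shows up as eth1 or not at all. A sketch of clearing it, done on a scratch copy here so it runs anywhere (the real file lives in /etc/udev/rules.d/ on the guest):

```shell
# Write a stale rule to a scratch copy of 70-persistent-net.rules (MAC is
# the example address from this thread), then truncate the file; udev
# regenerates it on the next boot with the current MAC.
rules=./70-persistent-net.rules
printf 'SUBSYSTEM=="net", ATTR{address}=="02:00:c0:a8:7a:02", NAME="eth0"\n' > "$rules"
: > "$rules"   # empty the file instead of deleting it
ls -l "$rules"
```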
Re: [one-users] VM cannot connect to outside (internet)
file YES - VM NICS ID NETWORK VLAN BRIDGE IP MAC 0 Public no virbr0 192.168.122.2 02:00:c0:a8:7a:02 fe80::400:c0ff:fea8:7a02 VIRTUAL MACHINE HISTORY SEQ HOST ACTION DS STARTTIME PROLOG 0 localhost none 0 05/09 05:49:13 0d 23h09m 0h00m01s VIRTUAL MACHINE TEMPLATE AUTOMATIC_REQUIREMENTS=!(PUBLIC_CLOUD = YES) CONTEXT=[ DISK_ID=1, HOSTNAME=sfout.dev.redeyeelectronics.com, TARGET=hdb ] CPU=4 GRAPHICS=[ LISTEN=0.0.0.0, PORT=5948, TYPE=VNC ] MEMORY=2048 TEMPLATE_ID=5 VMID=48 7 - The OS running inside the VM is Ubuntu 12.04 / Ubuntu 12.10. The VM has a network interface but it is commented out in /etc/network/interfaces. When I took over this setup it was like this and working, on 3.8. The latest contextualization packages are supported on your OS, so I see no reason why you shouldn't update them :). Let me know if you need anything else. [1]: http://docs.opennebula.org/4.6/user/virtual_machine_setup/bcont.html#bcont [2]: http://docs.opennebula.org/4.6/user/virtual_machine_setup/cong.html Best, Valentin On Sat, May 10, 2014 at 1:09 PM, Valentin Bud valentin@gmail.com wrote: Hi Sangram, On Fri, May 9, 2014 at 9:39 PM, Sangram Rath sangram.r...@gmail.com wrote: Hi, Virtual machine gets an IP through contextualization, however the virtual machine is not able to connect to the internet. Also from inside the VM, I do not see any other interface apart from lo. Is this normal in contextualization? I wouldn't call it normal contextualization because the VM is missing the primary Ethernet interface, eth0. Let's try to figure out why. I am able to ping the VM from the same host. Host is CentOS 6.1. Host has br0 connected to interface eth0. And virbr0. Have you defined a virtual network in OpenNebula? Does that network use br0 or virbr0? Where would you want your VMs to connect to, br0 or virbr0? If you want to isolate the VMs in a private network defined on virbr0 you have to enable IP forwarding on the host and either NAT or route the virbr0 network to the outside world.
It would help in troubleshooting if you could post the output of onevnet list and onevnet show <name of your virtual network or its id>. Does the template that you instantiate the VM from have a NETWORK section? Can you share the output of onetemplate show <name of template or id>? One more thing that would help is the output of onevm show <name of VM or id>. What OS are you running inside the VM? It's strange that the VM doesn't have an eth0 interface. You can also check the boot logs and search for Ethernet adapters. I also think that lspci output would help you. Where did you get the VM image from? Have you built it yourself? Maybe the udev rules are still present and the interface doesn't show up because of that. Best, Valentin -- Thanks, Sangram Rath ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] DHCP and OpenNebula
Hello Christophe, If you really want to use DHCP in combination with OpenNebula I suggest you take a look at omshell [1]. With a simple hook you could enter the required DHCP host entry via OMAPI and your VM will pick it up at boot. [1]: http://www.linuxcommand.org/man_pages/omshell1.html Best, Valentin On Thu, May 8, 2014 at 6:46 PM, Ionut Popovici io...@hackaserver.com wrote: NAT doesn't determine how you manage or control your inside network, and DHCP will not masquerade your IPs. DHCP is the Dynamic Host Configuration Protocol; with it you can control the hosts. Alternatively you can configure each host manually, or use contextualization scripts to take the IPs from your OpenNebula leases. http://archives.opennebula.org/documentation:rel4.4:cong On 5/8/2014 6:31 PM, Christophe Duez wrote: Okay, like that I know, but I thought there was a script or something so that you don't need to type all the IPs. The problem is that I have only 1 IP address that has a connection to the internet. This IP is dedicated to a VM running NAT and DHCP, so the other VMs will get an IP from this VM's DHCP and can pass through the NAT for connection to the outside world. Having to put all the IPs in the DHCP config file is crazy work, no? On Thu, May 8, 2014 at 4:41 PM, Ionut Popovici io...@hackaserver.com wrote: On 5/8/2014 5:09 PM, Christophe Duez wrote: I know of the MAC address scheme of OpenNebula but I don't know how to enter this in a DHCP config file :/ isc-dhcp-server config:
/etc/dhcp/dhcpd.conf: subnet 10.200.0.0 netmask 255.255.255.224 { range 10.200.0.2 10.200.0.30; option domain-name-servers 10.200.0.1; option routers 10.200.0.1; option broadcast-address 10.200.0.31; default-lease-time 600; max-lease-time 7200; } host host-2 { hardware ethernet 02:00:0a:c8:00:02; fixed-address 10.200.0.2; } host host-3 { hardware ethernet 02:00:0a:c8:00:03; fixed-address 10.200.0.3; } host host-4 { hardware ethernet 02:00:0a:c8:00:04; fixed-address 10.200.0.4; } host host-30 { hardware ethernet 02:00:0a:c8:00:1E; fixed-address 10.200.0.30; } On Thu, May 8, 2014 at 3:26 PM, Ionut Popovici io...@hackaserver.com wrote: Or you can make your DHCP server with pools and add fixed addresses via MAC, because OpenNebula uses a very nice MAC assignment based on the IP transformed into hex. The default MAC prefix for OpenNebula is 02:00 and the other 4 hex octets are the IP address octets converted from decimal to hex. For IP 10.10.10.10 OpenNebula will use MAC 02:00:0a:0a:0a:0a, and for IP 10.10.0.1, 02:00:0a:0a:00:01. With this you can easily make DHCP leases for your networks. On 5/8/2014 4:04 PM, Christophe Duez wrote: But I need to know how to do it, not simply 'install and it works'. On Thu, May 8, 2014 at 2:58 PM, Pavel Gusev pgu...@qsoft.ru wrote: I think you must use the Virtual Router (from the Marketplace) with a DHCP daemon -- Best regards, Pavel Gusev Head of System Administration QSOFT | Leading web integrator office 7(495) 771-7363 #110 | mob. 7(926) 850-1108 pgu...@qsoft.ru Moscow, Avangardnaya street 3 | qsoft.ru San Francisco, 222 Columbus Ave | qsoftus.com 08.05.2014, 16:25, Christophe Duez christophe.d...@student.uantwerpen.be: Hello, Is it possible that a virtual DHCP server gives IP addresses to the other VMs and that OpenNebula will take over this IP address in Sunstone? Because now my DHCP gives the new VM an IP address, but in the Sunstone interface the VM has another IP, given by the Virtual Network template. Can this be changed?
-- Kind regards, Duez Christophe Student at University of Antwerp: Master of Industrial Sciences: Electronics-ICT E christophe.d...@student.uantwperen.be L linkedin duez-christophe http://www.linkedin.com/pub/duez-christophe/74/7/39 ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro
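The IP-to-MAC scheme Ionut describes above (the default 02:00 prefix followed by the four IP octets in hex) can be sketched as a small shell function. The function name is illustrative, not part of OpenNebula, and it assumes the default MAC prefix from oned.conf:

```shell
# Derive the OpenNebula-style MAC address from an IPv4 address.
# Assumes the default 02:00 MAC prefix; adjust if your oned.conf differs.
ip_to_mac() {
  local IFS=.
  set -- $1                        # split the dotted quad into $1..$4
  printf '02:00:%02x:%02x:%02x:%02x\n' "$1" "$2" "$3" "$4"
}

ip_to_mac 10.10.10.10    # 02:00:0a:0a:0a:0a
ip_to_mac 10.200.0.30    # 02:00:0a:c8:00:1e
```

This matches the host entries in the dhcpd.conf above (e.g. 02:00:0a:c8:00:1e for 10.200.0.30, up to letter case), so a script can generate the whole host-N block list from the address range.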
Re: [one-users] DHCP and OpenNebula
Hello Jaime, I don't have a hook for that. It was an idea I came up with for this particular use case. I don't use DHCP with OpenNebula. Anyway, good question though. I have done a bit of research about OMAPI and how one can use it from different programming languages. Ruby: https://github.com/pikelly/dhcpIT http://www.ruby-doc.org/gems/docs/e/esxmagicwand-0.1.2/DHCP/Server.html Python: https://code.google.com/p/pyomapic/ https://github.com/jof/pypureomapi Perl: http://search.cpan.org/~jhthorsen/Net-ISC-DHCPd-0.14/lib/Net/ISC/DHCPd/OMAPI.pm Bash: http://www.jedi.be/blog/2010/12/08/automating-dhcp-management-with-omapi/ (also has a Java example) http://comments.gmane.org/gmane.network.dhcp.isc.dhcp-server/11411 Hope it helps :). Best, Valentin On Fri, May 9, 2014 at 12:17 PM, Jaime Melis jme...@opennebula.org wrote: Valentin, I actually hadn't thought of that, it's a great idea :) You could even map the other context values, not just the IP! Do you have this hook per chance? cheers, Jaime On Fri, May 9, 2014 at 10:12 AM, Valentin Bud valentin@gmail.com wrote: Hello Christophe, If you really want to use DHCP in combination with OpenNebula I suggest you take a look at omshell [1]. With a simple hook you could enter the required DHCP host entry via OMAPI and your VM will pick it up at boot. [1]: http://www.linuxcommand.org/man_pages/omshell1.html Best, Valentin
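As a sketch of what such a hook could feed to omshell: the script below adds a fixed-address entry to a running dhcpd over OMAPI. It assumes dhcpd has OMAPI enabled (an `omapi-port 7911;` line in dhcpd.conf, ideally with an OMAPI key); the host name and addresses are illustrative:

```shell
# Add a fixed-address host entry to a running dhcpd via OMAPI.
# Requires 'omapi-port 7911;' in dhcpd.conf; add 'key'/'omapi-key' for auth.
omshell <<'EOF'
server localhost
port 7911
connect
new host
set name = "one-vm-48"
set hardware-address = 02:00:0a:c8:00:64
set hardware-type = 1
set ip-address = 10.200.0.100
create
EOF
```

A hook would compute the hardware-address and ip-address from the VM's template (the MAC is derivable from the IP, as discussed earlier in this thread) and run a script like this at VM creation; a matching `open host` / `remove` sequence can clean up when the VM is destroyed.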
Re: [one-users] DHCP and OpenNebula
Hello Christophe, The best place to start is the documentation about Using Hooks [1]. Keep in mind that OpenNebula's documentation is of the same level of quality as FreeBSD's ;). Whenever you have a problem, check there first; in 99% of cases you will find the answer there. [1]: http://docs.opennebula.org/4.6/integration/infrastructure_integration/hooks.html Best, Valentin On Fri, May 9, 2014 at 12:19 PM, Christophe Duez duez_christo...@hotmail.com wrote: Is there any tutorial on how to enable hooks? On Fri, May 9, 2014 at 10:12 AM, Valentin Bud valentin@gmail.com wrote: Hello Christophe, If you really want to use DHCP in combination with OpenNebula I suggest you take a look at omshell [1]. With a simple hook you could enter the required DHCP host entry via OMAPI and your VM will pick it up at boot. [1]: http://www.linuxcommand.org/man_pages/omshell1.html Best, Valentin -- Kind regards, Duez Christophe Student at University of Antwerp: Master of Industrial Sciences: Electronics-ICT E christophe.d...@student.uantwperen.be L linkedin duez-christophe http://www.linkedin.com/pub/duez-christophe/74/7/39 ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
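To make the hook idea concrete, a registration in oned.conf might look like the fragment below. This is only an illustration: the hook name and script path are made up, and the exact attributes should be checked against the Using Hooks guide linked above.

```
# Fragment of /etc/one/oned.conf (OpenNebula 4.x syntax) -- hypothetical hook
# that fires when a VM reaches RUNNING and receives the VM id plus its
# base64-encoded template, from which the script can extract IP and MAC
# and push a host entry to dhcpd via omshell/OMAPI.
VM_HOOK = [
    name      = "dhcp_register",
    on        = "RUNNING",
    command   = "/var/lib/one/remotes/hooks/dhcp_register.sh",
    arguments = "$ID $TEMPLATE" ]
```

After editing oned.conf, oned has to be restarted for the hook to be picked up.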
Re: [one-users] Contextualization
Hello Christophe, I suggest you follow Andrei's suggestion in the future. It is much cleaner and more the OpenNebula way, so to speak. Read inline to continue with the process I have described. On Thu, Apr 17, 2014 at 2:30 PM, Christophe Duez christophe.d...@student.uantwerpen.be wrote: Hello, Thank you for the extensive response. This is what I did and where I got stuck: Export the libvirt xml of the VM from the host: virsh dumpxml one-45 > /tmp/XmlDumpFile Stop the VM: virsh destroy one-45 Undefine the domain: virsh undefine one-45 Configure the xml to mount a local folder from the host inside the VM [1]: <filesystem type='mount' accessmode='passthrough'> <driver type='path' wrpolicy='immediate'/> <source dir='/tmp/contextualization'/> <target dir='/tmp/contextualization'/> <readonly/> </filesystem> Somehow deliver the context package in that folder: mkdir /tmp/contextualization/ yum install opennebula-context -y --downloadonly --downloaddir=/tmp/contextualization/ Define the domain using your crafted XML: virsh define /tmp/XmlDumpFile Boot the machine: # virsh list --all # virsh start one-45 After it boots up, mount the shared folder in the VM and install the context package. The shared folder appears as a 9p [1] device inside the VM. You can find in [1] the way to mount it. One more thing though: the /tmp/contextualization directory and its contents must be owned by oneadmin, because the VM runs under the oneadmin user. This way you will be able to mount it in the VM. # chown -R oneadmin:oneadmin /tmp/contextualization VNC to it, mount the shared folder in the VM and install the deb/rpm. Can you explain the boot part of the whole process? And is what I did so far right? Yes, what you did so far is perfectly right. Again, Andrei's suggestion is way better and much simpler than mine. If you choose to follow the FILES approach please redefine your original one-45 machine. This way you won't lose your previous work, the already installed VM.
# virsh destroy one-45 # virsh undefine one-45 # virsh define /var/lib/one/datastores/0/45/deployment.0 # virsh start one-45 Now your VM should be back under OpenNebula's control. You can stop it, add one-context to FILES datastore, modify the template to include the one-context file in the CONTEXT section. Boot the machine, mount the CONTEXT ISO (the CDROM) and there you'll have the one-context package. [1]: http://www.linux-kvm.org/page/9p_virtio Best, Valentin On Thu, Apr 17, 2014 at 11:17 AM, Valentin Bud valentin@gmail.comwrote: Hello Christophe, Does your VM have a local network connection with the host or any other computer in your local network? If that's the case you can finish the installation, reboot, connect to the VM via SSH, scp the contextualization package from a local computer that is in the same network or has access to the network the VM is part of. If you don't have VM network connectivity at all, the process I know of is a little bit tedious but doable. Export the libvirt xml of the VM from the host, stop the VM, undefine the domain. Configure the xml to mount a local folder from the host inside the VM [1]. Somehow deliver the context package in that folder. Define the domain using your crafted XML, boot the machine, VNC to it, mount the shared folder in the host and install the deb/rpm. Are you somehow building a Debian image? If so try out bootstrap-vz, a bootstraping framework for Debian specifically targeted at bootstrapping systems for virtualized environments. [1]: http://libvirt.org/formatdomain.html#elementsFilesystems Best, Valentin On Thu, Apr 17, 2014 at 11:29 AM, Christophe Duez christophe.d...@student.uantwerpen.be wrote: Hello, I followed this video from your youtube channel Bootstrapping OpenNebula 3.4 and creating a VM from scratchhttps://www.youtube.com/watch?v=fQP4NQQ9NSI. I did this with the OpenNebula 4.4.1. Almost at the end they say you have to follow the documentation to setup contextualization. 
I searched the documentation and found out there are 2 ways: - Install from our repositories package *one-context* in Ubuntu/Debian or *opennebula-context* in CentOS/RedHat. Instructions to add the repository are in the installation guide: http://docs.opennebula.org/4.4/design_and_installation/building_your_cloud/ignc.html#ignc - Download and install the package for your distribution: - DEB: http://dev.opennebula.org/attachments/download/750/one-context_4.4.0.deb (compatible with Ubuntu 11.10 to 13.04 and Debian Squeeze) - RPM: http://dev.opennebula.org/attachments/download/747/one-context_4.4.0.rpm (compatible with CentOS and RHEL 6.x) Now the problem that I have is the following... Without an internet connection, downloading the one-context package from the repository is impossible, right? And downloading the package directly isn't possible
Re: [one-users] Contextualization
Hello Christophe, Does your VM have a local network connection with the host or any other computer in your local network? If that's the case you can finish the installation, reboot, connect to the VM via SSH, and scp the contextualization package from a local computer that is in the same network or has access to the network the VM is part of. If you don't have VM network connectivity at all, the process I know of is a little bit tedious but doable. Export the libvirt xml of the VM from the host, stop the VM, undefine the domain. Configure the xml to mount a local folder from the host inside the VM [1]. Somehow deliver the context package in that folder. Define the domain using your crafted XML, boot the machine, VNC to it, mount the shared folder in the VM and install the deb/rpm. Are you by any chance building a Debian image? If so, try out bootstrap-vz, a bootstrapping framework for Debian specifically targeted at bootstrapping systems for virtualized environments. [1]: http://libvirt.org/formatdomain.html#elementsFilesystems Best, Valentin On Thu, Apr 17, 2014 at 11:29 AM, Christophe Duez christophe.d...@student.uantwerpen.be wrote: Hello, I followed this video from your youtube channel, Bootstrapping OpenNebula 3.4 and creating a VM from scratch: https://www.youtube.com/watch?v=fQP4NQQ9NSI. I did this with OpenNebula 4.4.1. Almost at the end they say you have to follow the documentation to set up contextualization.
- Download and install the package for your distribution: - DEB: http://dev.opennebula.org/attachments/download/750/one-context_4.4.0.deb (compatible with Ubuntu 11.10 to 13.04 and Debian Squeeze) - RPM: http://dev.opennebula.org/attachments/download/747/one-context_4.4.0.rpm (compatible with CentOS and RHEL 6.x) Now the problem that I have is the following... Without an internet connection, downloading the one-context package from the repository is impossible, right? And downloading the package directly isn't possible either, is it? So how do I enable/install the contextualization? Please help me... -- Kind regards, Duez Christophe ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro
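Assuming the `<target dir='/tmp/contextualization'>` passthrough filesystem from the procedure above, the guest-side steps might look like the following sketch; the mount point and package file names are illustrative:

```shell
# Inside the VM: mount the 9p share exported by the host.
# The 'device' argument must match the <target dir=...> tag in the domain XML.
mkdir -p /mnt/context
mount -t 9p -o trans=virtio,version=9p2000.L /tmp/contextualization /mnt/context

# Install the contextualization package (and its downloaded dependencies)
# delivered through the share.
yum localinstall /mnt/context/*.rpm      # CentOS/RHEL
# dpkg -i /mnt/context/one-context_*.deb # Debian/Ubuntu
```

If the mount fails, check that the guest kernel has the 9p modules (9p, 9pnet_virtio) available; on very old guest kernels, dropping the version option may help.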
Re: [one-users] Opennebula and openvswitch problem.
Shankhadeep, are you referring to Open vSwitch 2.0? I am currently running OvS 2.0 with libvirt 0.9.12 on Debian Wheezy. I ran into the same problem as the OP and noticed that installation order matters: first OvS, second libvirt. Running OpenNebula 4.4. On Sat, Apr 5, 2014 at 5:58 AM, Shankhadeep Shome shank15...@gmail.com wrote: Well, 2.0 will break libvirt in a couple of ways, and OpenNebula uses libvirt, so I don't think it's going to work out so well for early adopters. As for Linux 3.14, LTS gets regular kernel updates so it will have it eventually. I don't think it's a good idea to stick to LTS for hypervisors; you will miss too much. KVM-based hypervisors should get updated yearly to the latest stable build. VM infrastructures are easy when it comes to in-place upgrades, especially OpenNebula. You can always add new nodes and retire older ones gracefully. On Thu, Apr 3, 2014 at 5:01 AM, Stefan Kooman ste...@bit.nl wrote: Quoting Leszek Master (keks...@gmail.com): I'm waiting for the official release of the next LTS version, I can use only LTS in my production, so I was testing it on 12.04. If there isn't any official manual on how to solve this problem I'll upgrade my distro and try then :) Thanks for your help. Precise/quantal suffers from this bug: https://bugs.launchpad.net/bugs/1084028 Fixed in newer releases: saucy / trusty (tested by me). This is apart from the legacy bridging stuff. With virtualization development happening so quickly I would recommend going for newer instead of older. Hopefully Linux 3.14 and QEMU 2.0 will make it into Trusty ... Gr.
Stefan -- | BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351 | GPG: 0xD14839C6 +31 318 648 688 / i...@bit.nl -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Sunstone noVNC with WSS support
Hi Stefan, On Tue, Mar 11, 2014 at 6:07 PM, Stefan Kooman ste...@bit.nl wrote: Quoting Valentin Bud (valentin@gmail.com): I totally agree with you about the user experience, and for that it is worth investing a few dollars. I guess I am just frustrated that TLS fails to provide peer-to-peer trust. So I guess it's time (for you) to deploy DANE [1,2]. You do need to have DNSSEC enabled for your domain ... and still have to trust the (root) DNS servers ... but that's already a big improvement if you ask me. Especially if you operate your own DNS servers and have control over the signing process. You are absolutely right. We operate the DNS infrastructure and plan to implement DNSSEC this year. We have a mix of public and private zones and we are still researching the simplest way to get DNSSEC implemented. Do you have any tips / recommendations for the above scenario? Once I have DNSSEC up and running I will surely implement DANE :). [1]: https://tools.ietf.org/html/rfc6698 [2]: http://www.internetsociety.org/deploy360/blog/2013/12/want-to-quickly-create-a-tlsa-record-for-dane-dnssec/ -- | BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351 | GPG: 0xD14839C6 +31 318 648 688 / i...@bit.nl -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
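For reference, a TLSA record of the kind DANE needs can be derived from a certificate with stock openssl. The file name and domain below are placeholders, and the parameter combination (usage 3, selector 1, matching type 1) is one common choice, not the only valid one:

```shell
# Hash the certificate's SubjectPublicKeyInfo with SHA-256
# (TLSA usage 3 = domain-issued cert, selector 1 = SPKI, matching type 1).
openssl x509 -in server.crt -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -hex

# Publish the digest in the (DNSSEC-signed) zone, e.g.:
# _443._tcp.www.example.com. IN TLSA 3 1 1 <hex-digest>
```

Selector 1 (the public key rather than the whole certificate) has the nice property that the record survives certificate renewals as long as the key pair is kept.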
Re: [one-users] Sunstone noVNC with WSS support
Hello Wilma, On Thu, Feb 6, 2014 at 6:20 PM, Wilma Hermann wilma.herm...@gmail.com wrote: There is a really easy fix for that: get a real certificate from a real CA. You should not use self-signed certs for a production environment. And why is that? Is Verisign's random number generator better than yours? A real certificate from a real CA? I don't get that. Last time I checked, my CA looked pretty real to me, conforming with RFC 5280. And the certificates for the browser and VPNs issued by that CA are also real. None of the RFCs I've read about PKI tell me that I SHOULD NOT use self-signed certs for production environments. Your business's image could suffer from a self-signed cert, but that's another story. Technology is technology and it should work either way, be it self-signed or not. Best, Valentin Greetings Wilma 2014-02-06 ML mail mlnos...@yahoo.com: This workaround fixes that problem, yes, but it is not a good workaround, especially if you want to offer OpenNebula to real customers. I hope a better alternative can be found in the future, but I am aware that this is mostly a browser problem :| Regards ML On Thursday, February 6, 2014 10:56 AM, Daniel Molina dmol...@opennebula.org wrote: Hi, On 5 February 2014 16:58, ML mail mlnos...@yahoo.com wrote: Hello, I would like to use noVNC in Sunstone over an encrypted channel (WSS). Therefore I have generated my own SSL key and certificate, which I have added to the sunstone-server.conf configuration. The problem is that this does not work; when I start VNC from the Sunstone web interface I get the following error message in novnc.log: SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca Does this mean I need an official SSL certificate?
Please, check if the solution proposed in this thread fixes your problem: http://lists.opennebula.org/pipermail/users-opennebula.org/2014-February/026405.html Cheers Regards ML -- Daniel Molina Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org http://www.opennebula.org/ | dmol...@opennebula.org | @OpenNebula -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Sunstone noVNC with WSS support
Hi, On Thu, Feb 6, 2014 at 11:54 PM, ML mail mlnos...@yahoo.com wrote: You are totally right, production needs a real cert... I kindly disagree. People need education. Best, Valentin Regards, ML On Thursday, February 6, 2014 5:20 PM, Wilma Hermann wilma.herm...@gmail.com wrote: There is a really easy fix for that: Get a real certificate from a real CA. You should not use self-signed certs for a production environment. Greetings Wilma 2014-02-06 ML mail mlnos...@yahoo.com: This workaround fixes that problem yes but it is not a good workaround especially if you want to offer opennebula to real customers. I hope another better alternative can be found in the future but I am aware that this is mostly a browser problem :| Regards ML On Thursday, February 6, 2014 10:56 AM, Daniel Molina dmol...@opennebula.org wrote: Hi, On 5 February 2014 16:58, ML mail mlnos...@yahoo.com wrote: Hello, I would like to use noVNC in Sunstone over an encrypted channel (WSS). Therefore I have generated my own SSL key and certificate which I have added to the sunstone-server.conf configuration. The problem is that this does not work, when I start VNC from the Sunstone web interface I get the following error message in novnc.log: SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca Does this mean I need an official SSL certificate? 
Please, check if the solution proposed in this thread, fixes your problem http://lists.opennebula.org/pipermail/users-opennebula.org/2014-February/026405.html Cheers Regards ML -- Daniel Molina Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org http://www.opennebula.org/ | dmol...@opennebula.org | @OpenNebula -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
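Wilma's advice in this thread (use a CA-issued certificate for production) comes down to generating a key and a certificate signing request for the CA to sign. A minimal sketch with openssl follows; the filenames and the CN are placeholders, and sunstone-server.conf would then point at the key plus the certificate the CA returns:

```shell
set -e
# Generate a private key and a certificate signing request (CSR) to
# submit to a real CA. Filenames and the CN are illustrative only.
openssl genrsa -out sunstone.key 2048
openssl req -new -key sunstone.key -out sunstone.csr \
  -subj "/CN=sunstone.example.com"
# Sanity-check the CSR before sending it to the CA:
openssl req -in sunstone.csr -noout -verify
```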
Re: [one-users] could not ssh on to same same machine as oneadmin
Hi, As root execute the following: # su - oneadmin As oneadmin, generate the SSH key pair, add the public key to authorized_keys and you are good to go. $ ssh-keygen -t rsa -b 4096 $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys $ chmod 600 ~/.ssh/authorized_keys $ ssh localhost Cheers and Goodwill, On Mon, Feb 3, 2014 at 8:03 PM, Neelaya Dhatchayani neels.v...@gmail.com wrote: hi i am trying to ssh passwordless into the same machine but in vain... username: oneadmin hostname: onedaemon with root user no problem please help ... thanks neelaya -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] (RESEND) need to create Flows? for openvswitch-based ONE (4.2) setup -- (passed on ebtables)
Hello Mark, On Thu, Nov 21, 2013 at 01:01:17PM -0600, Mark Biggers wrote: Hello Valentin, thanks for the reply. On 11/21/2013 03:30 AM, Valentin Bud wrote: Hello Mark, Before pointing you to the problem I think your config has, you should also check that you have routing enabled in the machine Are you speaking of ip route routes? Or some sysconf variable? Or, route(s) on the VMs themselves?? I was actually speaking about IP Forwarding, to be precise. That is attained via a sysctl. It should be set to 1. To make the changes persistent after reboot you have /etc/sysctl.conf. NOTE: Paths and configs might be different; I last used OpenSuse a billion years ago. # sysctl net.ipv4.ip_forward I have attempted ip route, to route to the VMs' 10.0.0.0/24 network Where's 10.0.0.1 supposed to be -- on the vbr0? Yes, 10.0.0.1 should be configured on the vbr0 interface and eth0 shouldn't be part of your Open vSwitch. and if you want Internet connectivity for VMs also NAT vbr0 over eth0. I want (1) internet connectivity to the VMs (VLANs) and connectivity out of the VMs' network. Not sure how to get this going, though it appears the OVSwitch has all the MAC-addr info for the VMs... Don't forget to NAT your 10/24 network over eth0 to have Internet connectivity available in the machines. Also you should have a DNS server running at 10.0.0.1 or change the DNS from the vnet to some recursive DNS server you have in your network. I suspect 192.168.1.1 is your recursive DNS server. If you want services from the VMs accessible from the 192.168.1/24 network you can forward ports to specific VMs from your laptop (host) using iptables. Let's say you want to access a Web Server located on the VM with IP Address 10.0.0.100. You'd have to DNAT port 80 from eth0's IP Address to 10.0.0.100 port 80, and have your webserver available at http://eth0.ip.addr.ess. This might not be the case if your router has routes to the 10/24 network through your laptop's eth0 interface. Currently, it does not.
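The DNAT port forwarding described above can be sketched as the following iptables rules (run as root on the host; eth0 and the 10.0.0.100 web server address are the values used in this thread, so adjust to taste):

```shell
# Enable forwarding, then DNAT incoming port 80 on eth0 to the VM.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.100:80
# Let the forwarded traffic pass the FORWARD chain as well:
iptables -A FORWARD -p tcp -d 10.0.0.100 --dport 80 -j ACCEPT
```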
Just attempted, and can't ping to 10.0.0.3 VM. (output, below). If you want to access your VMs from the 192.168.1.0/24 you shouldn't NAT the 10/24 network over eth0 but configure your Netgear router with a route to 10/24 via your eth0's IP Address. I would recommend to enter a MAC address - IP binding in the router so your laptop receives the same IP Address on eth0 every time it connects to the network. Reading once again your config I see you've inserted eth0 in the vbr0 OvS bridge and that it has an IP address from the 192.168.1.0/24 network. I suspect that is your local network. Yes, 192.168.1.0/24 is my external (laptop) network, including a Netgear router at 192.168.1.1. Do you have connectivity between your VMs using this setup? You should from what your setup tells me. The VMs, at 10.0.0.3 and 10.0.0.4 can ping each other, and ssh works between them just fine. They can only see the 10.0.0.0/24 network, and can't ping 10.0.0.1. They can't ping 10.0.0.1 because that IP Address hasn't been configured on any interface. Please follow the steps I have outlined in my previous E-Mail if you want the NAT-ed setup. That should get you started in no time. Here they are: # ovs-vsctl del-port vbr0 eth0 # dhclient eth0 (or set its IP address manually) # ovs-vsctl set Port vbr0 tag=0 # ip addr add 10.0.0.1/24 dev vbr0 # iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -d 0.0.0.0/0 -o eth0 -j MASQUERADE thank you, mark Hope it helps. Cheers and Goodwill, r...@sealion.ine.corp:~ # route add -net 10.0.0.0/24 gw 192.168.1.250 dev eth0 SIOCADDRT: Network is unreachable r...@sealion.ine.corp:~ # route add -net 10.0.0.0/24 gw 192.168.1.250 dev vbr0 r...@sealion.ine.corp:~ # ping 10.0.0.3 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
From 192.168.1.250 icmp_seq=1 Destination Host Unreachable From 192.168.1.250 icmp_seq=2 Destination Host Unreachable From 192.168.1.250 icmp_seq=3 Destination Host Unreachable From 192.168.1.250 icmp_seq=4 Destination Host Unreachable ^C --- 10.0.0.3 ping statistics --- 4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms pipe 4 r...@sealion.ine.corp:~ # netstat -nr Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 vbr0 10.0.0.0 192.168.1.250 255.255.255.0 UG 0 0 0 vbr0 67.139.46.149 192.168.1.1 255.255.255.255 UGH 0 0 0 vbr0 127.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 lo 127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 vbr0 r...@sealion.ine.corp
Re: [one-users] (RESEND) need to create Flows? for openvswitch-based ONE (4.2) setup -- (passed on ebtables)
GROUPNAMESTAT UCPUUMEM HOST TIME 41 oneadmin oneadmin one-vr42stop1768M 6d 00h26 42 oneadmin oneadmin vyatta-router runn0768M sealion.in 0d 16h48 43 oneadmin oneadmin vyatta-router-0 runn0768M sealion.in 0d 16h48 Script done on Wed 20 Nov 2013 04:59:17 PM EST Script started on Wed 20 Nov 2013 05:23:22 PM EST oneadmin@sealion:~ onevm show 42 VIRTUAL MACHINE 42 INFORMATION ID : 42 NAME: vyatta-router USER: oneadmin GROUP : oneadmin STATE : ACTIVE LCM_STATE : RUNNING RESCHED : No HOST: sealion.ine.corp START TIME : 11/14 16:55:09 END TIME: 11/15 09:43:24 DEPLOY ID : one-42 VIRTUAL MACHINE MONITORING USED MEMORY : 768M USED CPU: 0 NET_TX : 0K NET_RX : 533K PERMISSIONS OWNER : um- GROUP : --- OTHER : --- VM DISKS ID TARGET IMAGE TYPE SAVE SAVE_AS 0 vdaVyatta Core 6.5R1 - kvm file NO - VM NICS ID NETWORK VLAN BRIDGE IP MAC 0 ovsnet_0_0yes vbr0 10.0.0.3 02:00:0a:00:00:03 fe80::400:aff:fe00:3 VIRTUAL MACHINE HISTORY SEQ HOSTACTION REAS STARTTIME PROLOG 0 sealion.ine.cor stop user 11/14 16:55:10 0d 00h14m 0h00m23s 1 sealion.ine.cor none erro 11/15 09:37:31 0d 00h00m 0h00m00s 2 sealion.ine.cor none erro 11/15 09:43:01 0d 00h00m 0h00m23s 3 sealion.ine.cor stop user 11/15 14:16:01 0d 03h15m 0h00m22s 4 sealion.ine.cor stop user 11/20 11:27:59 0d 02h40m 0h00m00s 5 sealion.ine.cor none none 11/20 14:08:59 0d 03h14m 0h00m00s USER TEMPLATE ERROR=Fri Nov 15 09:43:24 2013 : Error executing image transfer script: Error creating ISO symbolic link VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, HOSTNAME=MAINHOST, IMAGE_UNAME=oneadmin, IP_GEN=192.168.122.42, TARGET=vdb ] CPU=1 GRAPHICS=[ LISTEN=0.0.0.0, PORT=5942, TYPE=vnc ] MEMORY=768 OS=[ ARCH=i686 ] TEMPLATE_ID=44 VMID=42 oneadmin@sealion:~ onevm show 43 VIRTUAL MACHINE 43 INFORMATION ID : 43 NAME: vyatta-router-02 USER: oneadmin GROUP : oneadmin STATE : ACTIVE LCM_STATE : RUNNING RESCHED : No HOST: sealion.ine.corp START TIME : 11/14 16:55:54 END TIME: 11/15 09:43:54 DEPLOY ID : one-43 VIRTUAL MACHINE MONITORING USED MEMORY : 768M USED 
CPU: 0 NET_TX : 0K NET_RX : 464K PERMISSIONS OWNER : um- GROUP : --- OTHER : --- VM DISKS ID TARGET IMAGE TYPE SAVE SAVE_AS 0 vdaVyatta Core 6.5R1 - kvm file NO - VM NICS ID NETWORK VLAN BRIDGE IP MAC 0 ovsnet_0_0yes vbr0 10.0.0.4 02:00:0a:00:00:04 fe80::400:aff:fe00:4 VIRTUAL MACHINE HISTORY SEQ HOSTACTION REAS STARTTIME PROLOG 0 sealion.ine.cor stop user 11/14 16:56:10 0d 00h14m 0h00m21s 1 sealion.ine.cor none erro 11/15 09:38:01 0d 00h00m 0h00m00s 2 sealion.ine.cor none erro 11/15 09:43:31 0d 00h00m 0h00m22s 3 sealion.ine.cor stop user 11/15 14:17:01 0d 03h14m 0h00m24s 4 sealion.ine.cor stop user 11/20 11:28:29 0d 02h39m 0h00m00s 5 sealion.ine.cor none none 11/20 14:33:59 0d 02h49m 0h00m00s USER TEMPLATE ERROR=Fri Nov 15 09:43:53 2013 : Error executing image transfer script: Error creating ISO symbolic link VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, HOSTNAME=MAINHOST, IMAGE_UNAME=oneadmin, IP_GEN=192.168.122.43, TARGET=vdb ] CPU=1 GRAPHICS=[ LISTEN=0.0.0.0, PORT=5943, TYPE=vnc ] MEMORY=768 OS=[ ARCH=i686 ] TEMPLATE_ID=44 VMID=43 oneadmin@sealion:~ exit exit Script done on Wed 20 Nov 2013 05:23:33 PM EST 1 ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] error creating vm
Hi Neelaya, The error states that it cannot connect to host onehost2 via SSH because of Permission denied. Can oneadmin SSH without a password (read private/public key) to onehost2? See the docs [1] how to achieve that. [1]: http://opennebula.org/documentation:rel4.2:ignc#secure_shell_access_front-end Cheers and Goodwill, On Thu, Nov 21, 2013 at 8:42 AM, Neelaya Dhatchayani neels.v...@gmail.comwrote: Hi I tried creating vm using tm_ssh . i got the error as unable to login to onedaemon (the host which holds the opennebula frontend). Then i started the sshd on onedaemon. Again i got the error . It asks me to login to onedaemon through ssh passwordless. I do not understand why should i login to frontend through ssh the following is the error message i m receiving Mon Nov 18 22:57:22 2013 [DiM][I]: New VM state is FAILED Mon Nov 18 23:04:28 2013 [DiM][I]: New VM state is CLEANUP. Mon Nov 18 23:04:28 2013 [DiM][I]: New VM state is PENDING Mon Nov 18 23:04:50 2013 [DiM][I]: New VM state is ACTIVE. Mon Nov 18 23:04:50 2013 [LCM][I]: New VM state is PROLOG. Mon Nov 18 23:04:51 2013 [TM][I]: Command execution fail: /var/lib/one/remotes/tm/ssh/clone onedaemon:/var/lib/one/datastores/1/04d23a705c946ca0584501de685c7ec4 onehost2:/var/lib/one//datastores/0/1/disk.0 1 1 Mon Nov 18 23:04:51 2013 [TM][I]: clone: Cloning onedaemon:/var/lib/one/datastores/1/04d23a705c946ca0584501de685c7ec4 in /var/lib/one/datastores/0/1/disk.0 Mon Nov 18 23:04:51 2013 [TM][E]: clone: Command scp -r onedaemon:/var/lib/one/datastores/1/04d23a705c946ca0584501de685c7ec4 onehost2:/var/lib/one//datastores/0/1/disk.0 failed: Permission denied, please try again. Mon Nov 18 23:04:51 2013 [TM][I]: Permission denied, please try again. Mon Nov 18 23:04:51 2013 [TM][I]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). 
Mon Nov 18 23:04:51 2013 [TM][E]: Error copying onedaemon:/var/lib/one/datastores/1/04d23a705c946ca0584501de685c7ec4 to onehost2:/var/lib/one//datastores/0/1/disk.0 Mon Nov 18 23:04:51 2013 [TM][I]: ExitCode: 1 Mon Nov 18 23:04:51 2013 [TM][E]: Error executing image transfer script: Error copying onedaemon:/var/lib/one/datastores/1/04d23a705c946ca0584501de685c7ec4 to onehost2:/var/lib/one//datastores/0/1/disk.0 Mon Nov 18 23:04:51 2013 [DiM][I]: New VM state is FAILED pls any one knows about this thanks neels ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
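The fix Valentin points to boils down to giving oneadmin a passwordless key pair whose public half lands in authorized_keys on onehost2 (`ssh-copy-id oneadmin@onehost2` does that in one step). The mechanics can be sketched locally like this, with a scratch directory standing in for the remote ~/.ssh:

```shell
set -e
# Simulate what ssh-copy-id does under the hood: append the public key
# to authorized_keys and tighten its permissions. A temporary directory
# stands in for oneadmin's ~/.ssh on onehost2.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$tmp/id_rsa"
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"
grep -c 'ssh-rsa' "$tmp/authorized_keys"   # prints 1
```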
Re: [one-users] Export image
Hi Ionut, No need to export the image; just oneimage show the image you want to move, look for SOURCE and copy the file pointed to there to the new location. Cheers, On Fri, Nov 8, 2013 at 1:43 PM, Ionut Popovici io...@hackaserver.com wrote: Is there a way to export an image that is in a datastore? I want to export the image because I want to move some images to a new server. But the servers are in different locations so I was thinking to move them via ftp or something. ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro
Re: [one-users] OpenNebula LVM Datastore 4.2 Documentation
Hi Jaime, Thanks for the fix. I guess I have missed the sudoers files from the sources. Cheers and Goodwill, On Wed, Nov 6, 2013 at 5:23 PM, Jaime Melis jme...@opennebula.org wrote: Hi Valentin, fixed! thanks for the feedback. By the way, there is a comprehensive list of all sudo required permissions here: https://github.com/OpenNebula/one/blob/master/share/pkgs/CentOS/opennebula.sudoers https://github.com/OpenNebula/one/blob/master/share/pkgs/Debian/opennebula.sudoers https://github.com/OpenNebula/one/blob/master/share/pkgs/Ubuntu/opennebula.sudoers https://github.com/OpenNebula/one/blob/master/share/pkgs/openSUSE/opennebula.sudoers cheers, Jaime On Thu, Oct 31, 2013 at 10:59 AM, Valentin Bud valentin@gmail.comwrote: Dear Community, I was trying to use the LVM datastore driver [1] and followed the docs to set it up. I have found a bug in the docs. oneadmin needs password-less sudo permissions for: * lvremove * lvcreate * lvs * dd * *vgdisplay* vgdisplay(8) is used to monitor [2] the datastore. Without this the datastore reports 0 size and images cannot be registered. [1]: http://opennebula.org/documentation:rel4.2:lvm_ds [2]: https://github.com/OpenNebula/one/blob/release-4.2/src/datastore_mad/remotes/lvm/monitor#L59 Good Will, -- Valentin Bud databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Jaime Melis Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | jme...@opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] OpenNebula Virtual Router 4.2 login failed, password needed.
Hello Yuchang, Have you set up ROOT_PUBKEY or ROOT_PASSWORD in the CONTEXT of the Virtual Router VMs' template, as stated in the Virtual Router documentation [1] in the Configuration section? [1]: http://opennebula.org/documentation:rel4.2:router Good Will, On Thu, Oct 31, 2013 at 3:38 AM, yuchangw yuchang_subscr...@126.com wrote: Hi, I downloaded OpenNebula Virtual Router 4.2, which is Alpine Linux based. When I start the virtual machine with libvirt's virtual machine manager, I cannot log in without a password; the account used is root. So I tried Virtual Router 3.8 (root, password: router). It works ok. Anyone help me? thanks yuchang -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
[one-users] OpenNebula LVM Datastore 4.2 Documentation
Dear Community, I was trying to use the LVM datastore driver [1] and followed the docs to set it up. I have found a bug in the docs. oneadmin needs password-less sudo permissions for: * lvremove * lvcreate * lvs * dd * *vgdisplay* vgdisplay(8) is used to monitor [2] the datastore. Without this the datastore reports 0 size and images cannot be registered. [1]: http://opennebula.org/documentation:rel4.2:lvm_ds [2]: https://github.com/OpenNebula/one/blob/release-4.2/src/datastore_mad/remotes/lvm/monitor#L59 Good Will, -- Valentin Bud databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
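Put together, the required sudoers entry would look something like the fragment below. The command paths are assumptions and vary per distribution (verify them with `which`, or compare against the opennebula.sudoers files shipped with the packages):

```
# /etc/sudoers.d/opennebula-lvm -- sketch; binary paths are distro-dependent
oneadmin ALL=(ALL) NOPASSWD: /sbin/lvcreate, /sbin/lvremove, /sbin/lvs, /sbin/vgdisplay, /bin/dd
```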
Re: [one-users] OpenNebula and IBM Blade V7000 FC
Hi Pedro, Aren't the LVM drivers [1] what you are looking for? [1]: http://opennebula.org/documentation:rel4.2:lvm_ds Cheers, On Thu, Oct 24, 2013 at 10:07 PM, Pedro Roger proger...@gmail.com wrote: Hi, We have four IBM Blade HS22 connected through Fibre Channel to an IBM V7000 Storwize, and we are planning to use LVM as a shared volume group between the hosts, but I can't see a pure LVM driver in the current documentation of OpenNebula, I only see an iSCSI/LVM implementation. Is this configuration good for implementing OpenNebula, or must I use the iSCSI ports? Or can someone suggest a better implementation using the V7000 through Fibre Channel? Thanks in advance -- -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Pedro Roger Magalhães Vasconcelos http://www.proger.eti.br ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro
Re: [one-users] VM fails to boot for architecture reasons
Hi Mark, In the jenkins template you have ARCH=x86_64 and in the second OS = [ ARCH=x86_64 ]. The latter is the correct way of setting the architecture for the VM. See the OS and Boot Options Section [1] from the docs. On Wed, Oct 23, 2013 at 12:29:27PM +0200, Mark Kusch wrote: cat oneconfig/vms/jenkins.tmpl NAME = jenkins.cmshared.mms-at-work.de MEMORY = 16384 CPU = 2 VCPU = 4 ARCH = x86_64 s/ARCH/OS = [ ARCH = x86_64 ]/ DISK = [ IMAGE = jenkins.cmshared.mms-at-work.de ] DISK = [ TYPE = swap, SIZE = 4096 ] NIC = [ NETWORK = MMS, IP = 192.168.198.10 ] GRAPHICS = [ TYPE = vnc, LISTEN = 0.0.0.0 ] cat oneconfig/templates/CentOS-6.4_x86_64-BASE NAME = CentOS-6.4_x86_64-BASE CPU= 1 VCPU = 2 MEMORY = 4096 OS = [ arch = x86_64 ] Here it's correct and thus it works :-). DISK = [ IMAGE = CentOS-6.4_x86_64 ] DISK = [ TYPE = swap, SIZE = 2048 ] [1]: http://opennebula.org/documentation:rel4.2:template#os_and_boot_options_section Good Will, Valentin ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
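Applying the substitution suggested above, the jenkins template would read as follows (a sketch assembled from the snippet in this mail, otherwise unchanged):

```
NAME = jenkins.cmshared.mms-at-work.de
MEMORY = 16384
CPU = 2
VCPU = 4
OS = [ ARCH = x86_64 ]
DISK = [ IMAGE = jenkins.cmshared.mms-at-work.de ]
DISK = [ TYPE = swap, SIZE = 4096 ]
NIC = [ NETWORK = MMS, IP = 192.168.198.10 ]
GRAPHICS = [ TYPE = vnc, LISTEN = 0.0.0.0 ]
```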
Re: [one-users] OpenNebulaConf 2013: Recordings, Presentations and Pics
Hello All, Nice indeed. Thanks! On Thu, Oct 10, 2013 at 6:07 PM, Liu, Guang Jun (Gene) gene@alcatel-lucent.com wrote: Very nice. Thanks! Gene Liu On Thu 10 Oct 2013 10:59:58 AM EDT, Tino Vazquez wrote: Dear OpenNebula users, A very good atmosphere was created and breathed by all participants at the OpenNebulaConf 2013 [1] last month in Berlin. Hugely interesting, techie chats were held all over the place in small groups, with people taking the chance to explain how do they use OpenNebula in their infrastructures to build the IaaS layer that they want, not the one that the Cloud Management Platform imposes. That is the great thing about OpenNebula’s modular architecture. So if you want to remember the great ambient of the conference, or if you haven’t got a chance to attend, here is your opportunity to (re)visit the knowledge shared in the conference in the form of recordings [2] of the keynotes and talks. Also, you can check out the presentations [3] of the speakers if you want to consult a particular detail that you do not quite remember. And, to make the experience even more immersing, scout through the conference pictures [4]. All the above is great, and you will feel almost as if you were there if you didn’t. And we say almost because, as CentOS project director, Karanbir Singh, said [5], the real value of a conference is what happened behind the scenes, between talks, during the fantastic evening event. So, if you liked what you just saw, make sure you **save the date for 2-4 December, 2014!**. 
The OpenNebula Team [1]http://opennebulaconf.com [2] VIDEOS http://opennebulaconf.com/previous/2013-09/presentations-2013/ [3] SLIDES http://opennebulaconf.com/previous/2013-09/slides-2013/ [4] PICS http://opennebulaconf.com/previous/2013-09/pictures-2013/ [5]https://twitter.com/kbsingh/status/383538439808774145 -- OpenNebula - Flexible Enterprise Cloud Made Simple -- Constantino Vázquez Blanco, PhD, MSc Senior Infrastructure Architect at C12G Labs www.c12g.com | @C12G | es.linkedin.com/in/tinova -- Confidentiality Warning: The information contained in this e-mail and any accompanying documents, unless otherwise expressly indicated, is confidential and privileged, and is intended solely for the person and/or entity to whom it is addressed (i.e. those identified in the To and cc box). They are the property of C12G Labs S.L.. Unauthorized distribution, review, use, disclosure, or copying of this communication, or any part thereof, is strictly prohibited and may be unlawful. If you have received this e-mail in error, please notify us immediately by e-mail at ab...@c12g.com and delete the e-mail and attachments and any copy from your system. C12G thanks you for your cooperation. ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] opennebula front-end reboot ip change
Hello hansz, Indeed, the vmcontext script modifies the IP Address of the node it is installed on. It uses the interface MAC address to compose the IP Address. You don't need, nor want, the one-context package on the frontend or compute nodes. You need it on the VMs you want contextualized. Cheers Good Will, Valentin Bud On Wed, Oct 9, 2013 at 6:24 AM, hansz hanshizhun...@126.com wrote: i have found that if i install the opennebula-context.rpm and reboot my front-end its ip changes; if i chkconfig vmcontext off and reboot, the ip does not change On 2013-10-09 00:12:58, Carlos Martín Sánchez cmar...@opennebula.org wrote: Hi, On Sun, Sep 29, 2013 at 11:02 AM, hansz hanshizhun...@126.com wrote: hi, these days i have found a question, i installed opennebula 4.2 on centos 6.4 (as the front-end) but every time i reboot the front-end its ip always changes to a different one; where maybe is a wrong config? pls give help OpenNebula does not make any changes to the frontend networking configuration, you can configure any IP you need. Regards -- Carlos Martín, MSc Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro
Re: [one-users] Problem with Openvswitch on OpenNebula3.8
Hey there, I have the following setup working really well. OS -- $ uname -a Linux godzilla 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64 GNU/Linux KVM LIBVIRT -- $ dpkg -l | egrep 'libvirt|kvm' ii libsys-virt-perl 0.9.12-2 amd64 Perl module providing an extension for the libvirt library ii libvirt-bin 0.9.12-11+deb7u1 amd64 programs for the libvirt library ii libvirt0 0.9.12-11+deb7u1 amd64 library for interfacing with different virtualization systems ii qemu-kvm 1.1.2+dfsg-6 amd64 Full virtualization on x86 hardware OPENVSWITCH -- $ dpkg -l | grep openvswitch ii openvswitch-common 1.11.0-1 amd64 Open vSwitch common components ii openvswitch-datapath-dkms 1.11.0-1 all Open vSwitch datapath module source - DKMS version ii openvswitch-switch 1.11.0-1 amd64 Open vSwitch switch implementations OPENNEBULA -- $ dpkg -l | grep opennebula ii opennebula 4.2.0-1 amd64 controller which executes the OpenNebula cluster services ii opennebula-common 4.2.0-1 all empty package to create OpenNebula users and directories ii opennebula-node 4.2.0-1 all empty package to prepare a machine as OpenNebula Node ii opennebula-sunstone 4.2.0-1 all web interface for the OpenNebula cluster services ii opennebula-tools 4.2.0-1 all Command-line tools for OpenNebula Cloud ii ruby-opennebula 4.2.0-1 all Ruby bindings for OpenNebula Cloud API (OCA) On Thu, Oct 03, 2013 at 09:53:44AM +0800, 木易@'4武 wrote: Thank you very much. It works, I forgot to define the network of the host. And last, one question: can Open vSwitch 1.10 or higher be used with OpenNebula 3.8 and 4.2? I found that the brcompat mod no longer exists. The 3.8 Open vSwitch documentation [1] does mention that the bridge compatibility layer must be installed. The 4.2 Open vSwitch documentation [2] says that KVM doesn't need the Linux bridge compatibility layer, although XEN does need it and a different network manager driver, ovswitch_brcompat, is provided for this scenario.
Prior to the above setup I ran Debian Squeeze with libvirt and qemu from backports and Open vSwitch 1.10 with the OpenNebula 3.8 branch. It worked without the bridge compatibility layer. I suggest you use the latest version of OpenNebula. I have noticed that Open vSwitch releases are stable and easy to package for DEB/RPM systems so I try to keep up with the latest release. [1]: http://opennebula.org/documentation:archives:rel3.8:openvswitch [2]: http://opennebula.org/documentation:rel4.2:openvswitch Good Will, Valentin -- Original message -- From: Valentin Bud valentin@gmail.com Sent: Thursday, October 3, 2013, 0:05 AM To: 木易@'4武 yangz...@qq.com Cc: users users@lists.opennebula.org Subject: Re: Re: [one-users] Problem with Openvswitch on OpenNebula3.8 Hi there, You should create the second host on the frontend with something like the following: # onehost create second_host -i kvm -v kvm -n ovswitch Have you done that already? Then on the second machine you must have an OVS bridge created with the name as specified in the vnet you have inside OpenNebula. Good Will, On Wed, Oct 2, 2013 at 6:39 PM, 木易@'4武 yangz...@qq.com wrote: Thanks for your reply, 1. Should I configure the driver on the second host? The network driver is only set on the frontend. I had configured the environment listed below on the second host: ruby environment; kvm environment; oneadmin user with no password to use sudo; passwordless access to the frontend in each direction; ovs version 1.9.0 with brcompat. 2. Yes, I can make the ovs bridge get created on the second machine, using no-password ssh. Normally, the network driver is defined on the frontend, and the frontend uses ssh to execute the ovs commands. But when I build a VM on the second host, the frontend did nothing about it. -- Original message -- From: Valentin Bud valentin@gmail.com Sent: Wednesday, October 2, 2013, 10:57 PM To: 木易@'4武 yangz...@qq.com Cc: users users@lists.opennebula.org Subject: Re: [one-users] Problem with Openvswitch on OpenNebula3.8 Hello, Is your second host configured to use the ovswitch network driver?
Is the ovs bridge created on the second machine with the name used in the vnet you've defined? Is your sudo configured to allow oneadmin to issue ovs-* commands without a password? On Wed, Oct 2, 2013 at 5:38 PM, 木易@'4武 yangz...@qq.com wrote: Hi, I'm trying to configure an OVS network in OpenNebula 3.8, and have succeeded on the frontend host (frontend and host together). But when I build a VM on another host, it can't add an ovs bridge. In normal
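In concrete terms, the checks Valentin lists translate to something like the following (run as root on the second host; vbr0 is a placeholder for whatever bridge name the vnet uses):

```shell
# On the second host: create the OVS bridge the vnet expects.
ovs-vsctl add-br vbr0
ovs-vsctl list-br        # vbr0 should appear in the output
# On the frontend: register the host with the ovswitch network driver.
onehost create second_host -i kvm -v kvm -n ovswitch
```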
Re: [one-users] OpenNebula and DHCP Server
Hello Fazli, I will make some assumptions about your infrastructure and provide possible approach(es). * Your KVM nodes have a single Ethernet interface, eth0, connected to a switch, and a router used as the default gateway for the 192.168.1/24 network, * Also the frontend is connected via the same switch with the rest of the nodes, * You have a br0 bridge with eth0 connected to it on each node and also the frontend, * Your frontend is also a node. If you have access to the router the simplest way would be to add an IP Address alias on the router interface as the default gateway for the new network. Configure a new network inside OpenNebula for that using the chosen subnet and the same bridge, br0. I don't know if you have any kind of security policies in place but be careful that in this way there is no Layer 2 separation and traffic between the two subnets is visible with tcpdump or other sniffers. The second approach I can think about is to have the frontend configured with the first IP Address from the new subnet on br0 and define a new network inside OpenNebula like the above. I don't know if this would work though. The NAT must be done for 10.100.0/24 over 192.168.1.X (the IP Address of the frontend from the 192.168.1/24 subnet). What I don't know is if iptables can MASQUERADE subnets on the same interface. Never tried it, it might work. Another approach that comes to mind is to use the Virtual Router and define a new subnet on the same br0 bridge. The Virtual Router would have an interface connected to the 192.168.1/24 network and one in the 10.100.0/24 one. Set it up to have the first IP Address from the 10.100.0/24 network so it is the default gateway. The same applies, traffic over L2 is not separated in any way. One more idea :-) would be to use Open vSwitch and GRE tunnels between the nodes. In this way you can use VLANs and transport over GRE between nodes. You can also set up IPSec encrypted GRE tunnels if you want security.
It might be overkill but again it depends on your requirements. Another working setup I have done is to use tinc VPN [1] between nodes in switch mode and connect it to the Open vSwitch from each host as a port. This way traffic that travels between nodes is fully encrypted and you can use the same L2 network in a secure fashion. But maybe the best approach would be to have a second network card, eth1, in each node. Connect that second card in an Open vSwitch and use VLANs with the frontend being the router, or any other node for that matter. [1]: http://www.tinc-vpn.org/ Good Will, Valentin On Thu, Oct 03, 2013 at 09:18:41AM +0800, M Fazli A Jalaluddin wrote: Hello Valentin, My setup for OpenNebula is 1 Front-end and several KVM nodes. The front-end and nodes are using IP address 192.168.1.xxx and are able to connect to the internet. The current networking setup for the VM is using dummy and bridge, br0. So, for the VM able to access to the internet, is by assigning them 192.168.1.xxx IP addresses. If I have many VMs, IP address 192.168.1.xxx will be depleted. Hence, I need to make a new private network such as, 10.0.1.xxx which will map to only a single 192.168.1.xxx, e.g 192.168.1.5. Thank you. Regards, Fazli On Wed, Oct 2, 2013 at 7:21 PM, Valentin Bud valentin@gmail.com wrote: Hello Fazli, The Virtual Router documentation [1] is definitely a good place to start. On Wed, Oct 2, 2013 at 1:57 PM, M Fazli A Jalaluddin fazli.jalalud...@gmail.com wrote: Hi, Is there any tutorial on how to use the VirtualRouter? I have download the image from Marketplace and Deploy a VM out of it. Then what should I do? My concern is that the Multiple VM will be able to be assigned a private IP address (at the same time connect to the internet) while the KVM host is using public IP address. I don't really understand your concern. Could you be more specific? Yes, every VM will get a private IP address from the Router in case you connect it to the private network. 
If you connect the VM to the public network too, you'd have to set up the IP address on the VM. If the context package is installed in the VM, it'll autoconfigure the public IP as well. [1]: http://opennebula.org/documentation:rel4.2:router Good Will, Thank you On Wed, Oct 2, 2013 at 4:26 PM, Carlos Martín Sánchez cmar...@opennebula.org wrote: Hi, On Wed, Oct 2, 2013 at 6:56 AM, M Fazli A Jalaluddin fazli.jalalud...@gmail.com wrote: Hi, May I know if the Virtual Router provides NAT? Yes, look for the Full Router section in the documentation: http://opennebula.org/documentation:rel4.2:router PS: Please reply also to the mailing list Regards. -- Carlos Martín, MSc Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula On Wed, Oct 2, 2013 at 6
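The MASQUERADE question raised in the thread above can be sketched with a couple of commands. This is a hedged sketch only: the interface name br0 and the subnets are taken from the thread, whether a single-interface MASQUERADE actually works in a given setup still has to be tested, and the commands default to a dry run (they are echoed, not applied) unless RUN is cleared and the script is run as root on the frontend.

```shell
#!/bin/sh
# Dry-run sketch: NAT the 10.100.0/24 VM subnet out through the
# frontend's 192.168.1.x address on br0. Interface name and subnets
# are assumptions taken from the thread above.
RUN="${RUN:-echo}"   # set RUN= (empty) and run as root to actually apply

FWD_CMD="sysctl -w net.ipv4.ip_forward=1"
NAT_RULE="-t nat -A POSTROUTING -s 10.100.0.0/24 -o br0 -j MASQUERADE"

$RUN $FWD_CMD
$RUN iptables $NAT_RULE
```

By default this only prints the commands it would run, so it is safe to inspect before applying.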
Re: [one-users] DNS search domains and rc.local not being read in VMs
Hello Evgeniy, On Thu, Oct 3, 2013 at 9:15 AM, Evgeniy Suvorov eesuvo...@gmail.com wrote: Hello, `And define the $searchdomain variable in your network template` Where can I find the network template? If you've already created your vnets inside OpenNebula you can update the variables using $ onevnet update vnet_id Define SEARCHDOMAIN=domain.tld and save it. Use SEARCHDOMAIN inside CONTEXT like SEARCHDOMAIN=$NIC[SEARCHDOMAIN, NETWORK=\"private\"] private being the name of the network the VM template uses. Afterwards modify or add new scripts to the contextualization packages. Enjoy your work. Good Will, 2013/9/26 Campbell, Bill bcampb...@axcess-financial.com There should be DNS scripts in the contextualization packages that provide some assignment of DNS. You should be able to slightly modify a script to pick up that variable you define, i.e. echo "search $searchdomain" >> /etc/resolv.conf And define the $searchdomain variable in your network template. Be advised, I only see this script in the Ubuntu context packages (not RHEL/CentOS, but it could be added either way) -- *From: *jerry steele jerry.ste...@cgg.com *To: *users@lists.opennebula.org *Sent: *Thursday, September 26, 2013 6:42:23 AM *Subject: *[one-users] DNS search domains and rc.local not being read in VMs Hello, I posted a few days ago about network configuration seemingly not being picked up properly – I think that was mainly due to my contextualisation packages being for the wrong version, so the context CDROM device wasn't being mounted. So now I have a separate, but related issue. I need to be able to have DNS search domains in my /etc/resolv.conf. I tried to achieve this by adding a SEARCH attribute to the virtual network template, but it doesn't seem to be picked up (looking in the context scripts, there's nothing there to pick it up, so that's kind of expected). 
I then tried to add it in /etc/rc.local, along with some lines to correctly set the hostname based on IP, but this doesn't appear to be picked up either. Could anyone tell me what might be going wrong here? Could it be the case that rc.local is not being run for some reason at boot? When I test the VM outside ONE, rc.local runs fine… Any help greatly appreciated. Thanks *Jerry Steele* IT Support Specialist Subsurface Imaging CGGVeritas Services (UK) Ltd Crompton Way Crawley W Sussex RH10 9QN T +44 (0)1293 683264 (Internal: 233264) M +44 (0)7920 237105 www.cgg.com *This email and any accompanying attachments are confidential. The information is intended solely for the use of the individual to whom it is addressed. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited.* -- Regards, Evgeniy. Tel.: +79060665574 ICQ: 380264507 -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
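Bill's one-liner above (echoing the search domain into /etc/resolv.conf) can be fleshed out into a small context-script sketch. This is an illustration, not the script shipped in the context packages: SEARCHDOMAIN is the variable defined in the vnet template, and the target file is parameterized here (with a scratch-file default) so the sketch can be tried without touching a real /etc/resolv.conf.

```shell
#!/bin/sh
# Sketch of a contextualization snippet that appends the search domain.
# RESOLV_CONF defaults to a scratch file for safe testing; on a real VM
# it would be /etc/resolv.conf. SEARCHDOMAIN normally comes from the
# context CD environment; a placeholder default is used here.
RESOLV_CONF="${RESOLV_CONF:-/tmp/resolv.conf.demo}"
SEARCHDOMAIN="${SEARCHDOMAIN:-example.internal}"

if [ -n "$SEARCHDOMAIN" ]; then
    echo "search $SEARCHDOMAIN" >> "$RESOLV_CONF"
fi
```

Run with SEARCHDOMAIN and RESOLV_CONF set appropriately from the context environment.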
Re: [one-users] OpenNebula and DHCP Server
Hi Fazli, On Thu, Oct 3, 2013 at 12:22 PM, M Fazli A Jalaluddin fazli.jalalud...@gmail.com wrote: Hi Valentin, Your assumption is correct. My method is to use the OpenNebula Virtual Router, referring to this page [1], and Open vSwitch. I have installed Open vSwitch on the host and I was able to deploy a VM in an isolated network. I tried to deploy the VirtualRouter in a virtual network. In two virtual networks in fact, in the PUBNET, which should be the 192.168 network from br0 on the nodes and frontend, and PRIVNET in the Open vSwitch network. My problem is, I cannot ping it and cannot SSH into it. You should be able to connect to PUBNET's virtual IP address from within the 192.168 network. Or you could add an internal port to the Open vSwitch bridge and try to connect to PRIVNET's virtual IP address of the VR. From the documentation, I understand that the VirtualRouter needs to be deployed as a VM in a specific virtual network and it will act as the DHCP server for the VMs in the same virtual network. I also have included the example context in the VirtualRouter template. My VirtualRouter template: NIC=[NETWORK_ID=0] NIC=[NETWORK_ID=9,IP=10.0.10.1] INPUT=[BUS=usb,TYPE=tablet] MEMORY=512 OS=[ARCH=x86_64,BOOT=hd] GRAPHICS=[LISTEN=0.0.0.0,TYPE=SPICE] DISK=[IMAGE_ID=24] CPU=0.5 CONTEXT=[TARGET=hdb,NETWORK=YES,FORWARDING=8080:10.0.10.2:80 10.0.10.2:22,DHCP=YES,PRIVNET=$NETWORK[TEMPLATE, NETWORK=\"ovs .10\"],TEMPLATE=TEMPLATE,SSH_PUBLIC_KEY=ssh-rsa B3NzaC1yc2EDAQABAAABAQCk+MN96iAn4uXRieJqyJG7WY32zW0LTXJBdISdjDLlp8QgFrxOdi9Aw2+eu+QSbVHwBsqOTimpOuzknisOhD4RPCTCT7G2/xaEUxWg0AB3ySrMZC3Dv5AgBy0CikFk50/CbwBtMjj2pRINm0axfP+cUT/VBhJRAiwVe2wsIOL/t2PGOy0O8Q2zjG1XfCVZPCYPOxj9Jk0y8DoMHp0ILA6gM7hGN4CKAQiXnbjv8WD9uFpRr7eruXQUdMuPn2wnyDMcCnzUEMtPUoPIy6gyAer3biRyEQkAXNJ+R1WXvX6Ah848MTyoICoA7KKIm9e3xe/SXMJxxOPHZLWSJSIRmhcd hpc1@hpc-workstation1,PUBNET=$NETWORK[TEMPLATE, NETWORK=\"Virtual Network .113\"],DNS=8.8.8.8 8.8.4.4] This looks good and should work. May I know how to actually use the VirtualRouter? 
[1] http://opennebula.org/documentation:rel4.2:router Good Will, On Thu, Oct 3, 2013 at 3:56 PM, Valentin Bud valentin@gmail.com wrote: [...]
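For reference, a FORWARDING entry such as 8080:10.0.10.2:80 in the router context amounts to a destination-NAT rule inside the Virtual Router. The sketch below is an assumption about the resulting rule, not the appliance's actual script; eth0 is assumed to be the router's public NIC, and the command is echoed rather than applied unless RUN is cleared and the script is run as root on the router.

```shell
#!/bin/sh
# Dry-run sketch: the iptables rule that a FORWARDING=8080:10.0.10.2:80
# entry would roughly translate to inside the Virtual Router (assumed;
# eth0 = the router's public NIC).
RUN="${RUN:-echo}"   # set RUN= (empty) and run as root on the router to apply

DNAT_RULE="-t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.10.2:80"
$RUN iptables $DNAT_RULE
```

The second entry in the template, 10.0.10.2:22, would map the same way onto port 22.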
Re: [one-users] ssh password less login not function
Hi Amier, On Thu, Oct 3, 2013 at 1:35 PM, Amier Anis myma...@gmail.com wrote: Hi team, once opennebula-common creates oneadmin, is there any issue if I reset the oneadmin password? The OS one or the OpenNebula one via oneuser? No problem in either case; just make sure to update ~/.one/one_auth if you change oneadmin's ONE password. Is password-less login required from the workers to the management node? If the management node is also a node and you want live migration to work, yes, you have to provide that. Good Will, On Wed, Oct 2, 2013 at 5:02 PM, Amier Anis myma...@gmail.com wrote: I don't think that SELinux is the issue, as I can SSH password-less without issue if OpenNebula is not installed. I have also tried using setenforce 0 and still have the same issue (I tried a different machine). [oneadmin@mnode lib]$ /usr/sbin/sestatus SELinux status: disabled I have tried both letting opennebula-common create the user and creating it manually. Same issue. This is how I install OpenNebula and the components: yum -y install opennebula-server opennebula-sunstone opennebula-ozones opennebula-gate opennebula-flow opennebula-node-kvm Yes, I have all the files in ~/.ssh [oneadmin@mnode .ssh]$ ls -l total 16 -rw------- 1 oneadmin oneadmin 406 Oct 2 10:19 authorized_keys -rw------- 1 oneadmin oneadmin 61 Oct 2 03:08 config -rw------- 1 oneadmin oneadmin 1675 Oct 2 10:19 id_rsa -rw------- 1 oneadmin oneadmin 406 Oct 2 10:19 id_rsa.pub I try to ssh -v node01 ... this error comes out. However, this error did not appear at first. -bash-4.1$ ssh -v 10.86.3.101 OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010 debug1: Reading configuration data /var/lib/one/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 10.86.3.101 [10.86.3.101] port 22. debug1: Connection established. 
debug1: identity file /var/lib/one/.ssh/identity type -1 debug1: identity file /var/lib/one/.ssh/id_rsa type 1 debug1: identity file /var/lib/one/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '10.86.3.101' is known and matches the RSA host key. debug1: Found key in /var/lib/one/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Bad format in credentials cache debug1: Unspecified GSS failure. Minor code may provide more information Bad format in credentials cache debug1: Unspecified GSS failure. Minor code may provide more information debug1: Unspecified GSS failure. 
Minor code may provide more information Bad format in credentials cache debug1: Next authentication method: publickey debug1: Trying private key: /var/lib/one/.ssh/identity debug1: Offering public key: /var/lib/one/.ssh/id_rsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Trying private key: /var/lib/one/.ssh/id_dsa debug1: Next authentication method: password Which is better: export /var/lib/one to every worker node, or manually export it to each worker? Thank you. Regards Best Wishes, *.: Amier Anis :.* Mobile: +6012-260-0819 On Wed, Oct 2, 2013 at 3:40 PM, Valentin Bud valentin@gmail.com wrote: Hello Amier, On Wed, Oct 2, 2013 at 10:27 AM, Amier Anis myma...@gmail.com wrote: Hi Valentin, Yes, I'm using the packages from the OpenNebula repo, and there were no errors during install, whether I created the oneadmin user first before installing or it was created automatically by the installer. yum -y install opennebula-server opennebula-sunstone opennebula-ozones opennebula-gate opennebula-flow opennebula-node-kvm The opennebula-common package provides the user oneadmin, so there is no need to create it manually. opennebula-common is required by opennebula-server, so there is no need to install it manually. I have also removed SELinux from the system. yum -y remove selinux-policy Have you rebooted your system
Re: [one-users] @vm contextualization
Hello anagha, On Tue, Oct 1, 2013 at 2:28 PM, anagha b banag...@gmail.com wrote: Hi, I am using opennebula-3.8.3 with KVM and want to contextualize a VM using Sunstone. You have to contextualize the machine before using it. That means you have to install the contextualization packages. Please do yourself a favor and read the contextualization overview [1]. Also read the fine docs on how to contextualize the VM for 3.8 [2]. Please help. Is it necessary to create a template to contextualize a VM created using NAT? Contextualization is simply some scripts that are run when the VM is booted. Those scripts come with the contextualization packages, which are available as DEBs and RPMs, so they work on RHEL- and Debian-based operating systems. The scripts activate swap, configure network interfaces based on the interface MAC address, and some other stuff. Or is contextualization necessary for bridged mode? Contextualization is needed if you want your machine to autoconfigure the IP address and have the OpenNebula user's SSH public key set as root's ~/.ssh/authorized_keys so you are able to SSH securely to it. I want communication between VMs of different VLANs. If you want communication between VLANs you can either use the Virtual Router [3] or you can provide routing using your host(s). It is up to you. Remember that routing is a function provided by networking equipment, in this case the Virtual Router or the host. [1]: http://opennebula.org/documentation:archives:rel3.8:context_overview [2]: http://opennebula.org/documentation:archives:rel3.8:cong [3]: http://opennebula.org/documentation:archives:rel3.8:router Please read the docs and come back if you need more help. Next time provide more information about your setup so we can help you further. 
Info like the network driver, the OS you are using, the OS you want to use for your VMs, and so on. Good Will, -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
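One detail worth spelling out from the reply above: the context scripts can configure the interface from the MAC address because OpenNebula encodes the IP in it, a two-byte prefix (02:00 by default) followed by the four IP octets in hex. A minimal sketch of the decoding:

```shell
#!/bin/sh
# Decode the IP that OpenNebula embeds in a NIC's MAC address.
# 02:00:c0:a8:00:64 -> c0.a8.00.64 in hex -> 192.168.0.100
mac="02:00:c0:a8:00:64"

old_ifs=$IFS; IFS=:; set -- $mac; IFS=$old_ifs   # split the MAC on ':'
ip=$(printf '%d.%d.%d.%d' "0x$3" "0x$4" "0x$5" "0x$6")
echo "$ip"   # prints 192.168.0.100
```

This is what lets a context script pick the right IP for each NIC without any DHCP server involved.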
Re: [one-users] ssh password less login not function
Hello Amier, On Wed, Oct 2, 2013 at 9:16 AM, Amier Anis myma...@gmail.com wrote: *Hi Guys,* I'm having an issue with password-less SSH login not functioning correctly. It works with a fresh CentOS 6.4 install, before installing OpenNebula. Once OpenNebula is started, it doesn't work any more. The worker nodes can log in password-less without any issue, but the management node can't log in to the worker nodes. I see you're using CentOS as the OS. Have you installed OpenNebula from packages [1]? Have you configured SSH as pointed out in [1]? I mean the ~/.ssh/config part. Another important aspect is SELinux. Is it on or off? If it is on, check the /var/lib/one/.ssh context; it should have ssh_home_t as its label. You can accomplish that using chcon -R -t ssh_home_t /var/lib/one/.ssh as either oneadmin or root. At first attempt, I installed OpenNebula and then set up ssh-keygen for oneadmin (created during installation), and I have also tried to create oneadmin first and then install OpenNebula, but both failed. If the mgmt server can SSH password-less to the workers, then the mgmt server can't SSH to itself, as the mgmt server also has VMs. I suggest you install OpenNebula from packages and work your way up from there. Don't forget to check the SELinux context of oneadmin's ~/.ssh and either SSH to the hosts in advance or configure SSH via ~/.ssh/config to allow connections without StrictHostKeyChecking. *My Setup* 1. I only export and share /var/lib/one/datastores to every worker 2. authorized_keys has been exported to every worker and vice versa. 3. declared every hostname in /etc/hosts Is there any issue or anything that I need to look into? Thank you. If you need more help in the future be sure to come back and ask for it :). Enjoy. 
*.: Amier Anis :.* Mobile: +6012-260-0819 [1]: http://opennebula.org/documentation:rel4.2:ignc#centos_platform_notes Good Will, -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Open Nebula Conference - Slideshows
Hello Duverne, The Netways guys gathered the presentations from all of us and I think they'll make them available somewhere. In the mean time my presentation about OpenNebula and Saltstack is available on SlideShare [1]. The states used and the logic behind the deployment will be available soon, just have to polish them a little bit :-). [1]: http://www.slideshare.net/databuspro/open-nebula-andsaltstackopennebulaconf2013 Good Will, On Tue, Oct 1, 2013 at 10:57 PM, Duverne, Cyrille cyrille.duve...@euranova.eu wrote: Hello guys, Would that be possible to make all the presentations of the ONE conf available somewhere ? Thanks in advance Cyrille ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] ssh password less login not function
Hello Amier, On Wed, Oct 2, 2013 at 10:27 AM, Amier Anis myma...@gmail.com wrote: Hi Valentin, Yes, I'm using the packages from the OpenNebula repo, and there were no errors during install, whether I created the oneadmin user first before installing or it was created automatically by the installer. yum -y install opennebula-server opennebula-sunstone opennebula-ozones opennebula-gate opennebula-flow opennebula-node-kvm The opennebula-common package provides the user oneadmin, so there is no need to create it manually. opennebula-common is required by opennebula-server, so there is no need to install it manually. I have also removed SELinux from the system. yum -y remove selinux-policy Have you rebooted your system afterwards? Yes, I already configured ~/.ssh/config [oneadmin@mnode]$ vi ~/.ssh/config Host * StrictHostKeyChecking no UserKnownHostsFile /dev/null ControlMaster auto ControlPath /tmp/%r@%h:%p This looks OK. I suggest you remove the packages, yum -y remove opennebula-\*, remove the oneadmin user, rm -rf /var/lib/one, reboot the machine and start from scratch. Let the packages deal with user creation. After that, on mnode you should have the oneadmin public/private keys in ~/.ssh and the public key in ~/.ssh/authorized_keys. You can configure ssh and try to ssh localhost. WARNING: don't remove the /var/lib/one directory if you have precious data in there. If that doesn't work, configure sshd with LogLevel DEBUG3 and watch what the logs say. Also take a look at /var/log/audit/audit.log. It might shed some light. Good Will, Thank you. *.: Amier Anis :.* Mobile: +6012-260-0819 On Wed, Oct 2, 2013 at 2:58 PM, Valentin Bud valentin@gmail.com wrote: [...] -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] OpenNebula and DHCP Server
Hello Fazli, The Virtual Router documentation [1] is definitely a good place to start. On Wed, Oct 2, 2013 at 1:57 PM, M Fazli A Jalaluddin fazli.jalalud...@gmail.com wrote: Hi, Is there any tutorial on how to use the VirtualRouter? I have downloaded the image from the Marketplace and deployed a VM out of it. Then what should I do? My concern is whether multiple VMs will be able to be assigned private IP addresses (and at the same time connect to the internet) while the KVM host is using a public IP address. I don't really understand your concern. Could you be more specific? Yes, every VM will get a private IP address from the Router in case you connect it to the private network. If you connect the VM to the public network too, you'd have to set up the IP address on the VM. If the context package is installed in the VM, it'll autoconfigure the public IP as well. [1]: http://opennebula.org/documentation:rel4.2:router Good Will, Thank you On Wed, Oct 2, 2013 at 4:26 PM, Carlos Martín Sánchez cmar...@opennebula.org wrote: Hi, On Wed, Oct 2, 2013 at 6:56 AM, M Fazli A Jalaluddin fazli.jalalud...@gmail.com wrote: Hi, May I know if the Virtual Router provides NAT? Yes, look for the Full Router section in the documentation: http://opennebula.org/documentation:rel4.2:router PS: Please reply also to the mailing list Regards. -- Carlos Martín, MSc Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula On Wed, Oct 2, 2013 at 6:56 AM, M Fazli A Jalaluddin fazli.jalalud...@gmail.com wrote: Hi, May I know if the Virtual Router provides NAT? Thank you On Thu, Sep 5, 2013 at 5:29 PM, Carlos Martín Sánchez cmar...@opennebula.org wrote: Hi, Actually, we do provide a Virtual Router appliance that contains a DHCP server. It knows the correct IP assigned by OpenNebula to each MAC. 
See http://opennebula.org/documentation:rel4.2:router Regards -- Join us at OpenNebulaConf2013 http://opennebulaconf.com in Berlin, 24-26 September, 2013 -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula http://twitter.com/opennebula On Thu, Sep 5, 2013 at 8:55 AM, Ionut Popovici io...@hackaserver.com wrote: No, OpenNebula doesn't provide DHCP. You could use VLANs to break up the network, and you can use contextualization to get the IP for the virtual machines. If you use bridge mode you should make rules in iptables (ebtables) for UDP dst port 67 and allow only responses from your DHCP server. Cheers. On 9/5/2013 9:49 AM, Mohammad Fazli Ahmat Jalaluddin wrote: Hi guys, I just want to ask a few questions. Does OpenNebula act as a DHCP server and give an IP address to the VM if it is not contextualized in the first place? When the VM is deployed (without context), e.g. an Ubuntu server whose default network configuration uses DHCP, the IP of the VM is different from the one that OpenNebula uses from the vnet lease. Is the IP address in the VM given by OpenNebula (acting as the DHCP server) or given by our network's existing DHCP server? The reason I'm asking is because our network is poisoned since there are 2 DHCP servers. BTW, our OpenNebula configuration for the network is using the dummy driver and a bridge on the frontend. Thank you very much. 
Regards, Fazli -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] how to find image filename in datastores folder
Hey there Lorenzo, oneimage list will give you the ID and the name of the images, among other things. oneimage show <ID or NAME> will give you information about the said image. The SOURCE is what you are looking for. oneimage show 100 | grep SOURCE Nice to see you around :-). Good Will, On Wed, Oct 2, 2013 at 4:02 PM, Lorenzo Faleschini lorenzo.falesch...@nordestsystems.com wrote: Hi, I'm quite a noob here, so maybe my question can sound stupid. I'm currently using CloudWeavers, actually running ONE 4.0 and MooseFS for storage of /var/lib/one (to take snapshots to use as a backup / DR tool). It is to be said I manage cats and not cattle (to cite @cdaffara at #OpenNebulaConf), so I use ONE to manage small infrastructures where almost all server images are persistent and I need to take care of them (I don't just fire up 100 VMs for sci-calc, but I manage a few critical VMs, taking advantage of all the features ONE + MFS can give me for reliability and flexibility). After this preamble I'll tell you what my problem is: I didn't find an easy way to know which filename an image has in /var/lib/one/datastores/1. For now I look for it in the folder at time of creation and take note (or I search in the LOG of the VM). But... is there an easier way to get the FILENAME out of the IMAGE_ID? thanks Lorenzo -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
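Valentin's grep above can be tightened into something that returns just the path. The fragment below runs against a made-up sample of `oneimage show` output (the key : value layout is assumed), so it can be tried without a frontend; on a real system the same pipeline would consume `oneimage show <ID>` directly.

```shell
#!/bin/sh
# Extract the SOURCE path from `oneimage show`-style output. The sample
# here is a stand-in; the datastore path is the one from this thread.
sample='ID     : 24
NAME   : ttylinux
SOURCE : /var/lib/one/datastores/1/af1f67b5da1befd1c585413de8aa17ea'

src=$(printf '%s\n' "$sample" | grep '^SOURCE' | cut -d: -f2 | tr -d ' ')
echo "$src"
```

On a frontend, replacing the sample with the live command gives the image's filename in the datastore directly.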
Re: [one-users] Problem with Openvswitch on OpenNebula3.8
Hello, Is your second host configured to use the ovswitch network driver? Is the ovs bridge created on the second machine with the name used in the vnet you've defined? Is your sudo configured to allow oneadmin to issue ovs-* commands without a password? On Wed, Oct 2, 2013 at 5:38 PM, 木易@'4武 yangz...@qq.com wrote: Hi, I'm trying to configure an OVS network in OpenNebula 3.8, and have succeeded on the frontend host (frontend and host on the same machine). But when I build a VM on another host, it can't add an ovs bridge port. Normally, it will use ovs commands to set it up. The VM log is here. “Tue Apr 23 06:37:08 2013 [DiM][I]: New VM state is DONE. Mon Sep 30 11:11:20 2013 [DiM][I]: New VM state is ACTIVE. Mon Sep 30 11:11:20 2013 [LCM][I]: New VM state is PROLOG. Mon Sep 30 11:11:20 2013 [VM][I]: Virtual Machine has no context Mon Sep 30 11:11:21 2013 [TM][I]: clone: Cloning /var/lib/one/datastores/1/af1f67b5da1befd1c585413de8aa17ea in host_old:/var/lib/one//datastores/0/80/disk.0 Mon Sep 30 11:11:21 2013 [TM][I]: ExitCode: 0 Mon Sep 30 11:11:21 2013 [LCM][I]: New VM state is BOOT Mon Sep 30 11:11:21 2013 [VMM][I]: Generating deployment file: /var/lib/one/80/deployment.0 Mon Sep 30 11:11:21 2013 [VMM][I]: ExitCode: 0 Mon Sep 30 11:11:21 2013 [VMM][I]: Successfully execute network driver operation: pre. Mon Sep 30 11:11:22 2013 [VMM][I]: ExitCode: 0 Mon Sep 30 11:11:22 2013 [VMM][I]: Successfully execute virtualization driver operation: deploy. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-vsctl set Port vnet9 tag=302. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=249,dl_src=02:00:c0:a8:b6:7f,priority=4,actions=normal Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=250,dl_src=02:00:63:09:09:7b,priority=4,actions=normal. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=250,priority=39000,actions=drop. 
Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-vsctl set Port vnet11 tag=150. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=251,dl_src=02:00:c0:a8:b4:4e,priority=4,actions=normal. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=251,priority=39000,actions=drop. Mon Sep 30 11:11:22 2013 [VMM][I]: ExitCode: 0 Mon Sep 30 11:11:22 2013 [VMM][I]: Successfully execute network driver operation: post. Mon Sep 30 11:11:22 2013 [LCM][I]: New VM state is RUNNING.” # ovs-vsctl show|less Bridge ovsbr0 Port vnet9 tag: 302 Interface vnet9 But when I build a VM on another host, it just builds the VM and does nothing for the network. Wed Oct 2 20:46:50 2013 [DiM][I]: New VM state is ACTIVE. Wed Oct 2 20:46:50 2013 [LCM][I]: New VM state is PROLOG. Wed Oct 2 20:46:50 2013 [VM][I]: Virtual Machine has no context Wed Oct 2 20:46:52 2013 [TM][I]: clone: Cloning host_old:/var/lib/one/datastores/1/af1f67b5da1befd1c585413de8aa17ea in /var/lib/one/datastores/0/82/disk.0 Wed Oct 2 20:46:52 2013 [TM][I]: ExitCode: 0 Wed Oct 2 20:46:52 2013 [LCM][I]: New VM state is BOOT Wed Oct 2 20:46:52 2013 [VMM][I]: Generating deployment file: /var/lib/one/82/deployment.0 Wed Oct 2 20:46:52 2013 [VMM][I]: ExitCode: 0 Wed Oct 2 20:46:52 2013 [VMM][I]: Successfully execute network driver operation: pre. Wed Oct 2 20:46:53 2013 [VMM][I]: ExitCode: 0 Wed Oct 2 20:46:53 2013 [VMM][I]: Successfully execute virtualization driver operation: deploy. Wed Oct 2 20:46:53 2013 [VMM][I]: ExitCode: 0 Wed Oct 2 20:46:53 2013 [VMM][I]: Successfully execute network driver operation: post. Wed Oct 2 20:46:53 2013 [LCM][I]: New VM state is RUNNING Can anybody help me solve this problem? It looks like both the network driver pre and post operations executed successfully. I tend to think that your second host is not configured to use the ovswitch network driver. Can you check that? 
Good Will, -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
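On the passwordless-sudo question above, a minimal sudoers fragment could look like the following. The file name and binary paths are assumptions; adjust them for your distribution and always edit sudoers files with visudo:

```
# /etc/sudoers.d/opennebula-ovs  (hypothetical file name)
oneadmin ALL=(root) NOPASSWD: /usr/bin/ovs-vsctl, /usr/bin/ovs-ofctl
```

With this in place, the driver's `sudo /usr/bin/ovs-vsctl ...` and `sudo /usr/bin/ovs-ofctl ...` calls seen in the log above can run non-interactively.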
Re: [one-users] Reply: Problem with Openvswitch on OpenNebula3.8
Hi there, You should create the second host on the frontend with something like the following: # onehost create second_host -i kvm -v kvm -n ovswitch Have you done that already? Then on the second machine you must have an OVS bridge created with the name specified in the vnet you have inside OpenNebula. Good Will, On Wed, Oct 2, 2013 at 6:39 PM, 木易@'4武 yangz...@qq.com wrote: Thanks for your reply. 1. Should I configure the driver on the second host? The network driver is only set on the frontend. I have configured the following environment on the second host: a Ruby environment; a KVM environment; the oneadmin user with passwordless sudo; passwordless SSH access between the hosts; OVS 1.9.0 with brcompat. 2. Yes, I can create the OVS bridge on the second machine over passwordless SSH. Normally the network driver is defined on the frontend, and the frontend executes the ovs commands over SSH. But when I build a VM on the second host from the frontend, it does nothing of the sort. Original Message -- *From:* Valentin Bud;valentin@gmail.com; *Sent:* Wednesday, October 2, 2013, 10:57 PM *To:* 木易@'4武 yangz...@qq.com; ** *Cc:* users users@lists.opennebula.org; ** *Subject:* Re: [one-users] Problem with Openvswitch on OpenNebula3.8 Hello, Is your second host configured to use the ovswitch network driver? Is the ovs bridge created on the second machine with the name used in the vnet you've defined? Is your sudo configured to allow oneadmin to issue ovs-* commands without a password? On Wed, Oct 2, 2013 at 5:38 PM, 木易@'4武 yangz...@qq.com wrote: Hi, I'm trying to configure an OVS network in OpenNebula 3.8, and it works on the frontend host (frontend and host on the same machine). But when I build a VM on another host, it can't add the OVS bridge. Normally it would use the ovs commands to build it. The VM log is here. “Tue Apr 23 06:37:08 2013 [DiM][I]: New VM state is DONE. Mon Sep 30 11:11:20 2013 [DiM][I]: New VM state is ACTIVE. Mon Sep 30 11:11:20 2013 [LCM][I]: New VM state is PROLOG. 
Mon Sep 30 11:11:20 2013 [VM][I]: Virtual Machine has no context Mon Sep 30 11:11:21 2013 [TM][I]: clone: Cloning /var/lib/one/datastores/1/af1f67b5da1befd1c585413de8aa17ea in host_old:/var/lib/one//datastores/0/80/disk.0 Mon Sep 30 11:11:21 2013 [TM][I]: ExitCode: 0 Mon Sep 30 11:11:21 2013 [LCM][I]: New VM state is BOOT Mon Sep 30 11:11:21 2013 [VMM][I]: Generating deployment file: /var/lib/one/80/deployment.0 Mon Sep 30 11:11:21 2013 [VMM][I]: ExitCode: 0 Mon Sep 30 11:11:21 2013 [VMM][I]: Successfully execute network driver operation: pre. Mon Sep 30 11:11:22 2013 [VMM][I]: ExitCode: 0 Mon Sep 30 11:11:22 2013 [VMM][I]: Successfully execute virtualization driver operation: deploy. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-vsctl set Port vnet9 tag=302. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=249,dl_src=02:00:c0:a8:b6:7f,priority=4,actions=normal Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=250,dl_src=02:00:63:09:09:7b,priority=4,actions=normal. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=250,priority=39000,actions=drop. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-vsctl set Port vnet11 tag=150. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=251,dl_src=02:00:c0:a8:b4:4e,priority=4,actions=normal. Mon Sep 30 11:11:22 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=251,priority=39000,actions=drop. Mon Sep 30 11:11:22 2013 [VMM][I]: ExitCode: 0 Mon Sep 30 11:11:22 2013 [VMM][I]: Successfully execute network driver operation: post. Mon Sep 30 11:11:22 2013 [LCM][I]: New VM state is RUNNING . ” # ovs-vsctl show|less Bridge ovsbr0 Port vnet9 tag: 302 Interface vnet9 But when I build a VM in another host,it just build a VM, nothing for network. 
Wed Oct 2 20:46:50 2013 [DiM][I]: New VM state is ACTIVE. Wed Oct 2 20:46:50 2013 [LCM][I]: New VM state is PROLOG. Wed Oct 2 20:46:50 2013 [VM][I]: Virtual Machine has no context Wed Oct 2 20:46:52 2013 [TM][I]: clone: Cloning host_old:/var/lib/one/datastores/1/af1f67b5da1befd1c585413de8aa17ea in /var/lib/one/datastores/0/82/disk.0 Wed Oct 2 20:46:52 2013 [TM][I]: ExitCode: 0 Wed Oct 2 20:46:52 2013 [LCM][I]: New VM state is BOOT Wed Oct 2 20:46:52 2013 [VMM][I]: Generating deployment file: /var/lib/one/82/deployment.0 Wed Oct 2 20:46:52 2013 [VMM][I]: ExitCode: 0 Wed Oct 2 20:46:52 2013 [VMM][I]: Successfully execute network driver operation: pre. Wed Oct 2 20:46:53 2013 [VMM][I]: ExitCode: 0 Wed Oct 2 20:46:53 2013 [VMM][I
Re: [one-users] @vm contextulization
Hello anagha b, What version of OpenNebula are you running and what OS do you want to contextualize? If you are using CentOS, Debian or Ubuntu, just install the opennebula-context/one-context package, save the image and use it afterwards. You can read how to prepare the virtual machine image in the documentation [1]. It takes 10 minutes to get it up and running. If you are using another OS for your VMs you could provide the IP configuration from a DHCP server, or you could use the Virtual Router [2] for that. [1]: http://opennebula.org/documentation:rel4.2:context_overview [2]: http://opennebula.org/documentation:rel4.2:router On Thu, Sep 26, 2013 at 8:53 AM, anagha b banag...@gmail.com wrote: Can anybody tell me how to contextualize a VM using Sunstone? Just setting the interface file in the VM is not sufficient to give the VM a static IP? ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
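For reference, a minimal CONTEXT section of the kind the one-context packages consume might look like this sketch (the network name "private" is a placeholder, not something from this thread):

```
CONTEXT = [
  NETWORK        = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"
]
NIC = [ NETWORK = "private" ]
```

With NETWORK=YES the contextualization scripts inside the image pick up the IP assigned from the virtual network lease at boot, so no interface file has to be edited by hand in the VM.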
Re: [one-users] virtio-blk-data-plane and x-data-plane=on
Hi Erico, This is the first time I have heard about virtio-blk-data-plane. Thank you for the info; it looks like this feature brings notable IO improvements. You can try to use the RAW Section [1] to pass special attributes to the underlying hypervisor. I have found a blog post [2] which describes a method to enable virtio-blk-data-plane using the libvirt XML. The RAW section DATA gets passed to libvirt in XML format. I think the following could work: RAW = [ TYPE = "kvm", DATA = "<qemu:commandline><qemu:arg value='-set'/><qemu:arg value='device.virtio-disk0.scsi=off'/></qemu:commandline><!-- config-wce=off is not needed in RHEL 6.4 --><qemu:commandline><qemu:arg value='-set'/><qemu:arg value='device.virtio-disk0.config-wce=off'/></qemu:commandline><qemu:commandline><qemu:arg value='-set'/><qemu:arg value='device.virtio-disk0.x-data-plane=on'/></qemu:commandline>" ] I don't have a test machine around and I would like to hear back from you whether it works or not. [1]: http://opennebula.org/documentation:rel4.2:template#raw_section [2]: http://blog.vmsplice.net/2013/03/new-in-qemu-14-high-performance-virtio.html Health and Goodwill, On Sat, Aug 31, 2013 at 11:01 PM, Erico Augusto Cavalcanti Guedes e...@cin.ufpe.br wrote: Hello, on [1], page 10, section 2.3 - KVM Configuration: To achieve the best possible I/O rates for the KVM guest, the virtio-blk-data-plane feature was enabled for each LUN (a disk or partition) that was passed from the host to the guest. To enable virtio-blk-data-plane for a LUN being passed to the guest, the x-data-plane=on option was added for that LUN in the qemu-kvm command line used to set up the guest. For example: /usr/libexec/qemu-kvm -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=disk or partition -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on I'll be grateful if you can help me with the following question: How can I customize the -device virtio-blk-pci parameters during OpenNebula VM initialization to insert x-data-plane=on? 
My VM Template: CONTEXT=[NETWORK=YES,SSH_PUBLIC_KEY=$USER[SSH_PUBLIC_KEY]] CPU=1 DISK=[AIO=native,BUS=virtio,CACHE=none,DEV_PREFIX=vd,FORMAT=raw,IMAGE_ID=1] GRAPHICS=[LISTEN=0.0.0.0,TYPE=VNC] MEMORY=256 NIC=[NETWORK_ID=0] OS=[ARCH=i686,BOOT=hd] KVM process on node: /usr/bin/kvm -S -M pc-i440fx-1.6 -cpu qemu32 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-27 -uuid c014337c-5255-e983-862e-b744f889aa49 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-27.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot c -drive file=/srv/cloud/one/var//datastores/0/27/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/srv/cloud/one/var//datastores/0/27/disk.1,if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=22,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:c0:a8:0f:6e,bus=pci.0,addr=0x3 -usb -vnc 0.0.0.0:27 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 I'm running ONE 4.2 on Debian 7.1 x86_64, kernel 3.2.0-4-amd64, with a customized qemu-1.6 (compiled by myself to support virtio-blk-data-plane), with Debian 7.1 i386 VMs. Thanks in advance, Erico. [1] ftp://public.dhe.ibm.com/linux/pdfs/KVM_Virtualized_IO_Performance_Paper_v2.pdf ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
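One caveat worth checking with the RAW approach above: libvirt only accepts `qemu:commandline` elements when the domain XML declares the QEMU namespace on the `<domain>` element, roughly as in the sketch below (not a complete domain definition). Whether the OpenNebula KVM driver emits this namespace automatically when RAW data is present is something to verify on a test VM:

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... name, memory, disks, devices, etc. ... -->
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
</domain>
```

If the deployed VM's XML (e.g. `virsh dumpxml one-27`) lacks the namespace, libvirt silently drops the passthrough arguments.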
Re: [one-users] Templates Path
Hi Umar, On Thu, Jul 04, 2013 at 02:13:39AM +0500, Umar Draz wrote: Hi, I just created a new template through the OpenNebula web interface. Now I want to know where that template is saved on the server; I want to check its content so I can create more templates from it. The template is stored in the database, either SQLite or MySQL, depending on your installation. To view the template's content you can use either the CLI or Sunstone, whichever suits you best. From the CLI: $ onetemplate show name_of_template | id_of_template From Sunstone: select the template and hit More on the top right above the templates list. There you can see the content of the template. Templates can be cloned either from the CLI or from Sunstone. From the CLI: $ onetemplate clone id_of_template new_template_name and updated using your preferred EDITOR with $ onetemplate update id_of_template | template_name From Sunstone: select a template and hit More and Clone. Enjoy :-). Greetings, -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] @virtualbridge
Hi, Change the BRIDGE attribute of the network defined in OpenNebula from virbr0 to virbr1. As far as I know the above operation requires you to remove the network and recreate it. Others may know better. Cheers and Goodwill, On Mon, Jun 17, 2013 at 8:32 AM, anagha b banag...@gmail.com wrote: Hi, I installed Open vSwitch on the host, which removed the existing bridge virbr0. I then removed Open vSwitch and the bridge it created, and using bridge-utils created a new bridge, virbr1. Now the OpenNebula frontend is looking for virbr0 when creating the domain for the VM, and the VM is in a failed state. What steps should be performed so that the frontend will use this virbr1 bridge instead of the old virbr0? Help. Thanks n Regards. ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- Valentin Bud http://databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
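For the recreate step, a replacement network definition of that era could look roughly like this sketch (the name, type and lease address are placeholder assumptions, not values from this thread):

```
# virbr1.net -- hypothetical network template
NAME   = "private-virbr1"
TYPE   = FIXED
BRIDGE = virbr1
LEASES = [ IP = 192.168.122.100 ]
```

Created with `onevnet create virbr1.net` after deleting the old network; any VM template that referenced the old network by name or ID then needs to point at the new one.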
Re: [one-users] iptables commands to run for externally access the applications run in the VM in a virtual LAN which is set up in a dedicated root server
Hi Qiubo, For directing traffic for a specific application (port) to a virtual machine I use the following: root at host # cat /etc/network/iptables *nat :PREROUTING ACCEPT :POSTROUTING ACCEPT :OUTPUT ACCEPT # Direct HTTP(S) traffic to 192.168.120.100 -A PREROUTING -s 0.0.0.0/0 -d 172.20.85.28/32 -p tcp --dport 80 -j DNAT --to-destination 192.168.120.100 -A PREROUTING -s 0.0.0.0/0 -d 172.20.85.28/32 -p tcp --dport 443 -j DNAT --to-destination 192.168.120.100 COMMIT *filter :INPUT DROP :FORWARD DROP :OUTPUT ACCEPT :PUBLIC - # INPUT CHAIN -A INPUT -i lo -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -i eth0 -j PUBLIC # FORWARD CHAIN -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i lo -j ACCEPT -A FORWARD -i eth0 -j PUBLIC # OUTPUT CHAIN -A OUTPUT -d 224.0.0.0/4 -o eth0 -j DROP # PUBLIC CHAIN -A PUBLIC -s 0.0.0.0/0 -d 192.168.120.100/32 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT -A PUBLIC -s 0.0.0.0/0 -d 192.168.120.100/32 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT COMMIT The above permits all RELATED/ESTABLISHED traffic to pass through the firewall on both INPUT and FORWARD chains. It also permits traffic to pass freely on the lo (loopback) interface. All traffic coming in on eth0 is directed to the PUBLIC CHAIN. On the PUBLIC CHAIN I allow traffic to destination ports 80 and 443 to the inside (LAN) IP address. To start the iptables on boot I have the following in `/etc/network/interfaces`: # Loopback device: auto lo iface lo inet loopback ### Start and configure iptables and ip6tables at startup up iptables-restore /etc/network/iptables up ip6tables-restore /etc/network/ip6tables WARNING: The above is only an example and should not be blindly copied and expected to work. In fact it will not work, it will block your access to the machine (host) in question and allow only HTTP(S) traffic to the specified VM. 
Cheers and Goodwill, On Fri, May 24, 2013 at 4:19 AM, Qiubo Su (David Su) qiub...@gmail.comwrote: Dear OpenNebula Community, I want to install/configure a virtual LAN (192.168.120.0/24) in one dedicated root server in data center. eth0 is the physical interface of this root server. virbr0 is the default virtual LAN switch provided by libvirtd (virbr0-nic is the correspondent virtual interface of virbr0). the virtual network switch is in NAT mode. a VM in this virtual LAN, and some applications runs in this VM. for externally accessing the applications (e.g. web server) run in the VM, need to use iptables command similar as below: LAN=virbr0 WAN=eth0 LAN_IP=192.168.120.1 WAN_IP=172.20.85.28 VM_IP=192.168.120.100 iptables -t nat -A PREROUTING -p tcp -d $WAN_IP --dport 80 -j DNAT --to-destination $VM_IP iptables -t nat -A POSTROUTING -p tcp -d $LAN_IP --dport 80 -j SNAT --to-source $VM_IP iptables -t nat -A OUTPUT -p tcp -d $WAN_IP --dport 80 -j DNAT --to-destination $VM_IP iptables -i FORWARD -p tcp -m tcp --in-interface $WAN --out-interface $LAN -d $VM_IP --dport 80 --j ACCEPT however after running the .sh script with the above iptables command, get below error iptables v1.4.12: multiple -i flags not allowed Try `iptables -h' or 'iptables --help' for more information. run the .sh script after commenting out the command iptables -i FORWARD -p tcp -m tcp --in-interface $WAN --out-interface $LAN -d $VM_IP --dport 80 --j ACCEPT, there is no error in the output. but only can locally access the VM web server with the registered domain name (i.e. can locally access the website hosted in the VM web server, within the virtual LAN scope), but can't externally access the website hosted in this VM web server. there may be some problem with this iptables .sh script. it is much appreciated if anyone can assist with this. thanks, Q.S. 
___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org -- w: http://databus.ro/blog in: http://www.linkedin.com/pub/valentin-bud/9/881/830 t: https://twitter.com/valentinbud ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
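About the `multiple -i flags not allowed` error in the quoted script: `-i` is short for `--in-interface`, so `iptables -i FORWARD ... --in-interface $WAN` passes two interface flags, and `--j` should be `-j`. The intent, appending a rule to the FORWARD chain, needs `-A`. A corrected rule might look like this (sketch, same placeholder variables as the original script):

```
iptables -A FORWARD -p tcp -m tcp --in-interface $WAN --out-interface $LAN -d $VM_IP --dport 80 -j ACCEPT
```

Note also that the quoted POSTROUTING/SNAT rule would normally use the host's address, not the VM's, as `--to-source`; the DNAT PREROUTING rule plus a FORWARD ACCEPT rule is usually enough for external access.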
Re: [one-users] Reference CONTEXT variables within CONTEXT
Hi, On Sat, May 18, 2013 at 12:05 PM, Valentin Bud valentin@gmail.com wrote: Hello Carlos, I would gladly open a request if you and others from the community think this is useful. I see this topic as a Request For Comments :-). For me the NAME of the VM would prove useful to be available within CONTEXT. The NAME being the one passed to `onetemplate instantiate 8 --name NAME`. This way I could have the following template which the users could use: CONTEXT=[ CONTEXT_FILES_LOCATION=$CONTEXT_FILES_LOCATION, DOMAIN=$DOMAIN, FILES=$CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key $CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key.pub, FQDN=$HOSTNAME.$DOMAIN., HOSTNAME=$HOSTNAME ] CONTEXT_FILES_LOCATION=/var/lib/cloud/context CPU=1 DISK=[ IMAGE=debian6-stable ] DISK=[ IMAGE=vdb.$HOSTNAME.$DOMAIN ] DOMAIN=domain.tld GRAPHICS=[ LISTEN=0.0.0.0, TYPE=vnc ] ** HOSTNAME=$NAME ** MEMORY=2048 NIC=[ NETWORK=dev.domain.tld ] OS=[ ARCH=x86_64 ] VCPU=4 From Sunstone, at instantiation time you are asked to give the VM a name. That NAME could be used in the template also. Maybe NAME is not the proper way for this variable. I meant that maybe NAME is not the proper name for this variable. Another question would be: could I use a variable, DB_NAME for example, in the template and reference that from within IMAGE? I will try this today and come back with the results. DB_NAME=dbtest HOSTNAME=$DB_NAME DISK=[ IMAGE=vdb.$DB_NAME.domain.tld ] I am trying to simplify the instantiation process for the users by providing the template and creating the second disk on the fly. In this way the user would only change the DB_NAME to a given name and everything would happen automagically behind the scenes. For now, the user needs to clone a persistent image for the second disk, change the $HOSTNAME and the name of the second disk to the one previously cloned. After these two steps the user can instantiate the new VM. This process is prone to human error. 
If I could save the second disk and reference the DB_NAME from within IMAGE I would restrict all the attributes from the template besides DB_NAME and the process would be less prone to errors. Does the above make any sense? The above applies to my use case. In my case each project I run on top of OpenNebula is a (sub)domain. The above would ease the process of instantiating a new VM. Maybe I could even use a hook to create the second image on the fly at CREATE time, though I think OpenNebula would complain that the disk is missing. Haven't tested this yet. I am off the track here. It would be nice to be able to save the Volatile disks. In my use case we need to preserve the disk on which the DB lives for accounting purposes. I don't see a point in reinventing the wheel with a hook. Thanks for your input on this matter. Cheers and Goodwill, Thoughts? Thanks. Cheers and Goodwill, On Fri, May 17, 2013 at 4:47 PM, Carlos Martín Sánchez cmar...@opennebula.org wrote: Hi, I'm glad you found a way to make it work. If it is really needed, we could add support to reference other context attributes. Please open a request If you still think it would be better. Cheers, Carlos -- Join us at OpenNebulaConf2013 http://opennebulaconf.com in Berlin, 24-26 September, 2013 -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebulahttp://twitter.com/opennebulacmar...@opennebula.org On Thu, May 16, 2013 at 2:14 PM, Valentin Bud valentin@gmail.comwrote: Hello Community, First of all, I apologise I forgot to say hi in my previous E-Mail :|. On Thu, May 16, 2013 at 10:28:14AM +0300, Valentin Bud wrote: I am trying to reference the CONTEXT variables from within CONTEXT. For example, I define the DOMAIN=domain.tld and the HOSTNAME=host. I would like to have a variable FQDN=$HOST.$DOMAIN.. I have tried to achieve the above using the following in CONTEXT section of the template. 
Case I -- $ onetemplate show vars TEMPLATE CONTENTS CONTEXT=[ DOMAIN=domain.tld, FQDN=$DOMAIN.$HOSTNAME., HOSTNAME=host ] [... output omitted for brevity ...] Instantiating the template results in the following variables added to the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..,* HOSTNAME=host, TARGET=hda ] Case II -- $ onetemplate show vars CONTEXT=[ DOMAIN=domain.tld, FQDN=$CONTEXT[$DOMAIN].$CONTEXT[$HOSTNAME]., HOSTNAME=host ] [... output omitted for brevity ...] Same result in the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..*, HOSTNAME=host, TARGET=hda ] [... output omitted for brevity ...] Is it possible to achieve what am I trying or should I search for a new solution
Re: [one-users] Reference CONTEXT variables within CONTEXT
Hello Carlos, I would gladly open a request if you and others from the community think this is useful. I see this topic as a Request For Comments :-). For me the NAME of the VM would prove useful to be available within CONTEXT. The NAME being the `onetemplate instantiate 8 --name NAME` This way I could have the following template which the users could use: CONTEXT=[ CONTEXT_FILES_LOCATION=$CONTEXT_FILES_LOCATION, DOMAIN=$DOMAIN, FILES=$CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key $CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key.pub, FQDN=$HOSTNAME.$DOMAIN., HOSTNAME=$HOSTNAME ] CONTEXT_FILES_LOCATION=/var/lib/cloud/context CPU=1 DISK=[ IMAGE=debian6-stable ] DISK=[ IMAGE=vdb.$HOSTNAME.$DOMAIN ] DOMAIN=domain.tld GRAPHICS=[ LISTEN=0.0.0.0, TYPE=vnc ] ** HOSTNAME=$NAME ** MEMORY=2048 NIC=[ NETWORK=dev.domain.tld ] OS=[ ARCH=x86_64 ] VCPU=4 From Sunstone, at instantiation time you are asked to give the VM a name. That NAME could be used in the template also. Maybe NAME is not the proper way for this variable. The above applies to my use case. In my case each project I run on top of OpenNebula is a (sub)domain. The above would ease the process of instantiating a new VM. Maybe I could even use a hook to create the second image on the fly at CREATE time, though I think OpenNebula would complain that the disk is missing. Haven't tested this yet. Thoughts? Thanks. Cheers and Goodwill, On Fri, May 17, 2013 at 4:47 PM, Carlos Martín Sánchez cmar...@opennebula.org wrote: Hi, I'm glad you found a way to make it work. If it is really needed, we could add support to reference other context attributes. Please open a request If you still think it would be better. 
Cheers, Carlos -- Join us at OpenNebulaConf2013 http://opennebulaconf.com in Berlin, 24-26 September, 2013 -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebulahttp://twitter.com/opennebulacmar...@opennebula.org On Thu, May 16, 2013 at 2:14 PM, Valentin Bud valentin@gmail.comwrote: Hello Community, First of all, I apologise I forgot to say hi in my previous E-Mail :|. On Thu, May 16, 2013 at 10:28:14AM +0300, Valentin Bud wrote: I am trying to reference the CONTEXT variables from within CONTEXT. For example, I define the DOMAIN=domain.tld and the HOSTNAME=host. I would like to have a variable FQDN=$HOST.$DOMAIN.. I have tried to achieve the above using the following in CONTEXT section of the template. Case I -- $ onetemplate show vars TEMPLATE CONTENTS CONTEXT=[ DOMAIN=domain.tld, FQDN=$DOMAIN.$HOSTNAME., HOSTNAME=host ] [... output omitted for brevity ...] Instantiating the template results in the following variables added to the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..,* HOSTNAME=host, TARGET=hda ] Case II -- $ onetemplate show vars CONTEXT=[ DOMAIN=domain.tld, FQDN=$CONTEXT[$DOMAIN].$CONTEXT[$HOSTNAME]., HOSTNAME=host ] [... output omitted for brevity ...] Same result in the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..*, HOSTNAME=host, TARGET=hda ] [... output omitted for brevity ...] Is it possible to achieve what am I trying or should I search for a new solution? I have a simple use case. I am generating, via a hook, the ssh keys for the VM in question. At boot I copy the keys from /mnt to /etc/ssh via a crafted one-context script. Awesome mechanism by the way :-). I am generating the keys in CONTEXT_FILES_LOCATION=/var/lib/cloud/context/host.domain.tld. 
I would like to use the following in the CONTEXT section: CONTEXT=[ CONTEXT_FILES_LOCATION=/var/lib/cloud/context/host.domain.tld./, DOMAIN=domain.tld, FILES=$CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key ..., HOSTNAME=host, FQDN=$HOSTNAME.$DOMAIN. ] This makes the template much more dynamic. I would just change the HOSTNAME and the paths would get generated dynamically. Writing the HOSTNAME, DOMAIN, CONTEXT_FILES_LOCATION outside the CONTEXT section and referencing them from within CONTEXT works :-). Example -- $ onetemplate show vars TEMPLATE CONTENTS CONTEXT=[ CONTEXT_FILES_LOCATION=$CONTEXT_FILES_LOCATION, DOMAIN=$DOMAIN, FILES=$CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key $CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key.pub, FQDN=$HOSTNAME.$DOMAIN., HOSTNAME=$HOSTNAME ] CONTEXT_FILES_LOCATION=/var/lib/cloud/context CPU=1 DISK=[ IMAGE=vars ] DOMAIN=domain.tld GRAPHICS=[ LISTEN=0.0.0.0, TYPE=vnc ] HOSTNAME=host MEMORY=2048 NIC=[ NETWORK=host.domain.tld ] OS=[ ARCH=x86_64 ] VCPU=4 After instantiating the machine I have the desired results. $ onevm show vars VIRTUAL MACHINE
[one-users] Reference CONTEXT variables within CONTEXT
I am trying to reference the CONTEXT variables from within CONTEXT. For example, I define the DOMAIN=domain.tld and the HOSTNAME=host. I would like to have a variable FQDN=$HOST.$DOMAIN.. I have tried to achieve the above using the following in CONTEXT section of the template. Case I -- $ onetemplate show vars TEMPLATE CONTENTS CONTEXT=[ DOMAIN=domain.tld, FQDN=$DOMAIN.$HOSTNAME., HOSTNAME=host ] [... output omitted for brevity ...] Instantiating the template results in the following variables added to the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..,* HOSTNAME=host, TARGET=hda ] Case II -- $ onetemplate show vars CONTEXT=[ DOMAIN=domain.tld, FQDN=$CONTEXT[$DOMAIN].$CONTEXT[$HOSTNAME]., HOSTNAME=host ] [... output omitted for brevity ...] Same result in the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..*, HOSTNAME=host, TARGET=hda ] [... output omitted for brevity ...] Is it possible to achieve what am I trying or should I search for a new solution? I have a simple use case. I am generating, via a hook, the ssh keys for the VM in question. At boot I copy the keys from /mnt to /etc/ssh via a crafted one-context script. Awesome mechanism by the way :-). I am generating the keys in CONTEXT_FILES_LOCATION=/var/lib/cloud/context/host.domain.tld. I would like to use the following in the CONTEXT section: CONTEXT=[ CONTEXT_FILES_LOCATION=/var/lib/cloud/context/host.domain.tld./, DOMAIN=domain.tld, FILES=$CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key ..., HOSTNAME=host, FQDN=$HOSTNAME.$DOMAIN. ] This makes the template much more dynamic. I would just change the HOSTNAME and the paths would get generated dynamically. Any help is appreciated. Cheers and Goodwill, -- Valentin Bud www.databus.pro | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Reference CONTEXT variables within CONTEXT
Hello Community, First of all, I apologise I forgot to say hi in my previous E-Mail :|. On Thu, May 16, 2013 at 10:28:14AM +0300, Valentin Bud wrote: I am trying to reference the CONTEXT variables from within CONTEXT. For example, I define the DOMAIN=domain.tld and the HOSTNAME=host. I would like to have a variable FQDN=$HOST.$DOMAIN.. I have tried to achieve the above using the following in CONTEXT section of the template. Case I -- $ onetemplate show vars TEMPLATE CONTENTS CONTEXT=[ DOMAIN=domain.tld, FQDN=$DOMAIN.$HOSTNAME., HOSTNAME=host ] [... output omitted for brevity ...] Instantiating the template results in the following variables added to the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..,* HOSTNAME=host, TARGET=hda ] Case II -- $ onetemplate show vars CONTEXT=[ DOMAIN=domain.tld, FQDN=$CONTEXT[$DOMAIN].$CONTEXT[$HOSTNAME]., HOSTNAME=host ] [... output omitted for brevity ...] Same result in the VM. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ DISK_ID=1, DOMAIN=domain.tld, *FQDN=..*, HOSTNAME=host, TARGET=hda ] [... output omitted for brevity ...] Is it possible to achieve what am I trying or should I search for a new solution? I have a simple use case. I am generating, via a hook, the ssh keys for the VM in question. At boot I copy the keys from /mnt to /etc/ssh via a crafted one-context script. Awesome mechanism by the way :-). I am generating the keys in CONTEXT_FILES_LOCATION=/var/lib/cloud/context/host.domain.tld. I would like to use the following in the CONTEXT section: CONTEXT=[ CONTEXT_FILES_LOCATION=/var/lib/cloud/context/host.domain.tld./, DOMAIN=domain.tld, FILES=$CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key ..., HOSTNAME=host, FQDN=$HOSTNAME.$DOMAIN. ] This makes the template much more dynamic. I would just change the HOSTNAME and the paths would get generated dynamically. 
Writing the HOSTNAME, DOMAIN, CONTEXT_FILES_LOCATION outside the CONTEXT section and referencing them from within CONTEXT works :-). Example -- $ onetemplate show vars TEMPLATE CONTENTS CONTEXT=[ CONTEXT_FILES_LOCATION=$CONTEXT_FILES_LOCATION, DOMAIN=$DOMAIN, FILES=$CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key $CONTEXT_FILES_LOCATION/$FQDN/ssh_host_rsa_key.pub, FQDN=$HOSTNAME.$DOMAIN., HOSTNAME=$HOSTNAME ] CONTEXT_FILES_LOCATION=/var/lib/cloud/context CPU=1 DISK=[ IMAGE=vars ] DOMAIN=domain.tld GRAPHICS=[ LISTEN=0.0.0.0, TYPE=vnc ] HOSTNAME=host MEMORY=2048 NIC=[ NETWORK=host.domain.tld ] OS=[ ARCH=x86_64 ] VCPU=4 After instantiating the machine I have the desired results. $ onevm show vars VIRTUAL MACHINE TEMPLATE CONTEXT=[ CONTEXT_FILES_LOCATION=/var/lib/cloud/context, DISK_ID=1, DOMAIN=dev.corview.de, FILES=/var/lib/cloud/context//ssh_host_rsa_key /var/lib/cloud/context//ssh_host_rsa_key.pub, FQDN=vars.domain.tld., HOSTNAME=vars, TARGET=hda ] Now I can easily generate SSH host keys for each machine if this wasn't done already in a previous VM instantiation. Thanks and sorry for the noise. If this is mentioned somewhere in the docs, my bad. Cheers and Goodwill, -- Valentin Bud http://databus.pro/ | valen...@databus.pro ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
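The key-generating hook mentioned in this thread could be sketched roughly as below. This is a hypothetical reconstruction, not the poster's actual script; the directory layout and file names follow the CONTEXT_FILES_LOCATION convention shown above:

```shell
#!/bin/sh
# Hypothetical hook sketch: generate per-VM SSH host keys under a context
# directory so the template's FILES attribute can reference them.
gen_host_keys() {
    fqdn="$1"
    base="$2"
    dir="$base/$fqdn"
    mkdir -p "$dir"
    # Generate only once: a re-instantiated VM keeps its previous host key
    if [ ! -f "$dir/ssh_host_rsa_key" ]; then
        ssh-keygen -q -t rsa -N "" -f "$dir/ssh_host_rsa_key"
    fi
    echo "$dir/ssh_host_rsa_key"
}

# Demo against a temporary directory; a real hook would use something like
# /var/lib/cloud/context and take the FQDN from the VM template
BASE="$(mktemp -d)"
KEY="$(gen_host_keys vars.domain.tld "$BASE")"
ls -l "$KEY" "$KEY.pub"
```

Because the keys live outside the VM image, the same key pair survives a shutdown/instantiate cycle, which is the point of the check before generating.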
Re: [one-users] OpenNebula 4.0 is out!
Awesome release! Thank you for all you are doing :-). On Thu, May 9, 2013 at 12:13 AM, Nicolas Bélan nicolas.be...@gmail.com wrote: \o/ That's great and good news! Thank you all! Le 08/05/2013 18:52, Jon a écrit : This is awesome! Thanks for all the hard work! On May 8, 2013 10:35 AM, Tino Vazquez tin...@opennebula.org wrote: Dear Community, This is the official announcement of OpenNebula 4.0, codename Eagle, five years after our first public release. OpenNebula 4.0 is the result of the terrific feedback from the day-to-day operation of virtualized infrastructures by many of you, the result of all your contributions, bug reports, patches, and translations; but first and foremost, OpenNebula 4.0 is the realization of a vision of simplicity, openness, code correctness and a sysadmin-centric approach. OpenNebula 4.0 includes new features in most of its subsystems. We are showing for the first time a completely redesigned Sunstone, with a fresh and modern look and an updated workflow for most of the dialogs. The new Sunstone Views functionality allows customizing the GUI for each type of user or group, so the interface implements a different provisioning model for each role. There is a whole new set of operations for VMs, like system and disk snapshotting, capacity re-sizing, programmable VM actions and IPv6, among others. There are some new drivers also, like Ceph, as well as improvements for VMware, KVM and Xen. The scheduler has received some attention from the OpenNebula team to easily define more placement policies... and much more. As usual OpenNebula releases are named after a Nebula. The Eagle Nebula (catalogued as Messier 16 or M16, and as NGC 6611, and also known as the Star Queen Nebula) is a young open cluster of stars in the constellation Serpens, discovered by Jean-Philippe de Cheseaux in 1745-46. It is located about 7,000 light-years away from Earth. 
And last, but not least, we want to give a huge THANKS to our community, without whom OpenNebula wouldn't be anywhere near as good as it is today. So, let's fly over the clouds riding the Eagle ;) LINKS * Complete Release Notes: http://www.opennebula.org/software:rnotes:rn-rel4.0 * Download: http://downloads.opennebula.org/ * Documentation: http://opennebula.org/documentation:rel4.0 * Screencasts: http://opennebula.org/documentation:screencasts -- Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013 -- Constantino Vázquez Blanco, PhD, MSc Project Engineer | OpenNebula - The Open-Source Solution for Data Center Virtualization www.OpenNebula.org | @tinova79 | @OpenNebula -- w: http://databus.ro/blog in: http://www.linkedin.com/pub/valentin-bud/9/881/830 t: https://twitter.com/valentinbud
Re: [one-users] How to mount iscsi target in linux container?
Hello Dylan, On Fri, Mar 8, 2013 at 4:09 AM, cmcc.dylan dx10ye...@126.com wrote: Hi! Do you use the one-3.2 branch? I test the basic functions before I commit to GitHub. The LXC driver is written by myself; if you have questions you can tell me and we can fix it together! I don't know from *where* to get the LXC driver. From the official OpenNebula repository, or do you have your own repository from which I can check it out? If so, could you please give me a link? I would gladly test it and work side by side with you to improve it. Thanks. Cheers and Goodwill, Valentin Bud At 2013-03-07 14:29:51, Valentin Bud valentin@gmail.com wrote: Hello Dylan, I am trying to get OpenNebula 3.2.1 or 3.8.3 working with Linux Containers but I have run into an issue. I can't find the OneLXC drivers provided by CMRI. Following the OpenNebula blog post [1] about OneLXC, I tried to download them from https://github.com/cmri/opennebula-3.2.1-lxc.git but that gives me a 404. Big Cloud says in the second comment that the repo has been moved to https://github.com/cmri/one.git. I have tried to find the IM_MAD and VMM_MAD in that repo but I wasn't able to. The blog post also mentions that there should be a `src/vmm/LibVirtDriverLXC.cc` source file in the repo. Couldn't find that either. Where did you get the LXC drivers, or are you building LXC drivers from scratch? I would like to help on this matter, first by testing them and then by further improving them. I could use your thoughts on this matter. [1]: http://blog.opennebula.org/?p=3850 Thank you. Cheers and Goodwill, Valentin Bud On Tue, Mar 5, 2013 at 11:48 AM, Albert Avellana albertav...@gmail.com wrote: Hi Dylan, I'm Albert, a researcher from UPC university (Barcelona) working in a cloud community project. I've been testing your version of OpenNebula 3.2.1 adapted for LXC and it seems to work well.
I'll be glad to help you with the 3.8 version if you are interested, developing some parts or just testing and giving you feedback / reporting bugs. We are really interested in the possibility of fully integrating LXC with OpenNebula, so let me know if we can work together :) best regards, albert On 4 March 2013 15:07, cmcc.dylan dx10ye...@126.com wrote: Hi! I use opennebula-3.2.1 now, but I'm going to use opennebula-3.8. Yes, I use Ubuntu and have implemented the basic functions, for example create/delete/suspend/resume a Linux container instance. The question I talked about is that I plan to use shared storage for Linux containers, such as NFS and iSCSI. I think it's very suitable for a private cloud and a development environment. At 2013-03-04 16:25:42, Valentin Bud valentin@gmail.com wrote: Hello Dylan, What version of OpenNebula are you using? As far as I understand you are using Ubuntu as your OS and trying to boot up LXC containers on top of that. Am I right? What basic functions are you talking about? Start/stop LXC containers? Could you elaborate a little bit about your setup? I am thinking of using LXC containers for a project also and I am curious about your setup. Thank you. Cheers and Goodwill, Valentin On Mon, Mar 4, 2013 at 9:07 AM, cmcc.dylan dx10ye...@126.com wrote: Hi, everyone. Recently, I'm doing some work with Linux containers. I chose LXC as the hypervisor in the cloud platform OpenNebula. The basic functions are done. I plan to use iSCSI storage as shared storage. Because I chose Ubuntu as the container OS, I executed the command sudo apt-get install open-iscsi open-iscsi-utils. It failed, unfortunately.
When I install, it shows the following:

```
update-rc.d: warning: open-iscsi stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (0 6)
 * Starting iSCSI initiator service iscsid    [ OK ]
 * Setting up iSCSI targets                   [ OK ]
```

When I execute the iSCSI discovery command, it proves it is OK and shows:

```
ubuntu@lxc:~$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.35.17
192.168.35.17:3260,1 iqn.2013-02.node2
```

However, when I execute the iSCSI login command, it fails and shows:

```
$ sudo iscsiadm -m node --targetname iqn.2013-02.node2 -p 192.168.35.17 --login
Logging in to [iface: default, target: iqn.2013-02.node2, portal: 192.168.35.17,3260]
iscsiadm: got read error (0/0), daemon died?
iscsiadm: Could not login to [iface: default, target: iqn.2013-02.node2, portal: 192.168.35.17,3260]:
iscsiadm: initiator reported error (18 - could not communicate to iscsid
```

Does LXC support iSCSI? -- w: http://databus.ro/blog in: http://www.linkedin.com/pub/valentin-bud/9/881/830 t: https://twitter.com/valentinbud
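A note for readers of the archive: since an LXC container shares the host kernel and typically cannot run its own iscsid, one common workaround is to perform the iSCSI login on the LXC *host* and hand the resulting block device to the container. A rough sketch, using the target from the message above (the device name and the container config line are assumptions, to be adapted to your setup):

```
# On the LXC host, not inside the container:
$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.35.17
$ sudo iscsiadm -m node --targetname iqn.2013-02.node2 -p 192.168.35.17 --login
# Suppose the target appears as /dev/sdb (8:16). Either mount it on the host
# and bind-mount it into the container's rootfs, or allow the block device
# in the container config, e.g.:
#   lxc.cgroup.devices.allow = b 8:16 rwm
```

This is only a sketch of the usual division of labor between host and container, not something confirmed by the thread itself.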
Re: [one-users] vmcontext in one-context never executes init.sh from context iso
On Tue, Jan 22, 2013 at 1:48 PM, Rolandas Naujikas rolandas.nauji...@mif.vu.lt wrote: Hi, Because all files on the context ISO are not executable, the /mnt/init.sh line is never executed in /etc/init.d/vmcontext. Regards, Rolandas Naujikas Hello World, A fix could be to run the script using bash /mnt/init.sh. Do you people think that could lead to problems? As far as I know, Bash is present in all Linux distributions OpenNebula runs on. Regards, Valentin Bud
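The permission detail is easy to verify locally; a minimal demonstration, with a temporary file standing in for the script on the context ISO:

```
# A script without the executable bit, like the files on the mounted
# context ISO, still runs when invoked through bash explicitly.
script=$(mktemp)
echo 'echo contextualized' > "$script"
chmod 644 "$script"   # no executable bit, as on the ISO
bash "$script"        # works anyway; prints: contextualized
rm -f "$script"
```

So replacing the bare `/mnt/init.sh` call in /etc/init.d/vmcontext with `bash /mnt/init.sh` sidesteps the permission problem without touching the ISO.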
Re: [one-users] [Help] Opennebula
On Thu, Jan 17, 2013 at 09:16:19AM +0700, Dimas Alif wrote: hello, for the diploma thesis I'm working on, Implementation of OpenNebula In a Local Area Network, I used Ubuntu Server. How do I connect to the Ubuntu server with a web browser to access OpenNebula? And are there teachers who can teach me, mainly from Indonesia? thanks Hello Dimas, I would like to ask what version of OpenNebula you are using? I guess that you are using 3.8. The links that I am going to provide, which will help you achieve your goal, are for 3.8. OpenNebula has a GUI called Sunstone [1]. The Sunstone documentation will help you get it up and running in no time. Follow the docs, and if you have any kind of problem just come back to the list and ask. There are a lot of skilled people that can help you :-). And by the way, I think it is really cool that you are doing your thesis on cloud technology. I wish you good luck and a nice journey towards achieving your goal. You'll learn a lot of useful stuff along the way. [1]: http://opennebula.org/documentation:rel3.8:sunstone Cheers and Goodwill, v
Re: [one-users] Where to get ONE 2.2
Hello, I don't know where you can download the source directly from, but you can clone the official code repository and check out release 2.2.

```
$ git clone git://git.opennebula.org/one.git
$ cd one
$ git checkout -b one-2.2 origin/release-2.2
```

Cheers and Goodwill, Valentin On Thu, Jan 3, 2013 at 12:33 AM, Naveed Abbas naveed.ab...@yahoo.com wrote: Hi Can anyone tell me where I can download the ONE 2.2 source tarball? It is not available on the OpenNebula download page. Syed Naveed Abbas Rizvi
Re: [one-users] Ganglia integration
Hello World, On Friday, December 14, 2012, Olivier Sallou wrote: On 12/14/12 9:32 AM, Duverne, Cyrille wrote: Hello Olivier, Thanks for this feedback. My request wasn't clear enough; I already monitor my hosts and I see the OPENNEBULA_VMS_INFORMATION variable in Ganglia, but I don't know how to use it to provide graphs on the VM usage. Will the VM information be displayed in Sunstone? In Ganglia? OPENNEBULA_VMS_INFORMATION provides global VM status information to OpenNebula. Some graphs built from this data are available in Sunstone to monitor the VM (when you click on your VM in Sunstone). If you want Ganglia monitoring of your VM, you have to set up gmond in the VM itself. Or one can use Host sFlow [1] to gather CPU, RAM and network statistics. There is a blog post [2] describing the installation procedure. Works with Ganglia 3.2+. If you need other metrics you'd have to install gmond inside the VM. Enjoy. [1]: http://host-sflow.sourceforge.net/ [2]: http://blog.sflow.com/2012/01/using-ganglia-to-monitor-virtual.html v Olivier Any help is welcome on this. Cheers Cyrille On Thursday 13/12/2012 at 17:31, Olivier Sallou wrote: On 12/13/12 3:27 PM, Duverne, Cyrille wrote: Dear mailing-list, (these days it's like saying dear Santa :)) I've set up a Ganglia environment in my lab and can see the monitoring information for my hosts. I've followed the manual about Ganglia integration, updated the nodes, updated oned.conf, and created the cron entry as follows:

```
*/1 * * * * oneadmin gmetric -n OPENNEBULA_VMS_INFORMATION -t string -v `/var/lib/one/remotes/vmm/kvm/poll --kvm`
```

I've updated the probe file to point to my central Ganglia server. But I can't see anything in Ganglia web. I might be missing something on the Ganglia server side... but what? Where is the VM information defined for Ganglia to display it? Ganglia does not provide graphics for the VMs with OpenNebula.
VM information is stored in a variable accessible for each host with the name OPENNEBULA_VMS_INFORMATION (base64 encoded). OpenNebula will then query Ganglia to get the status of the VMs running on this host. OpenNebula uses Ganglia to extract the information, and not the contrary. If you want to monitor your host with Ganglia you can use the standard metrics provided by gmetric (cpu, ram, ...). Olivier Thanks in advance for your feedback Cheers Cyrille -- Olivier Sallou IRISA / University of Rennes 1 Campus de Beaulieu, 35000 RENNES - FRANCE Tel: 02.99.84.71.95 gpg key id: 4096R/326D8438 (keyring.debian.org) Key fingerprint = 5FB4 6F83 D3B9 5204 6335 D26D 78DC 68DB 326D 8438
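As an aside on the encoding: the value published with gmetric above is just the poll output base64-encoded, so it can be decoded by hand when debugging what Ganglia actually stores. A small self-contained illustration (the poll line here is made up, not real `poll --kvm` output):

```
# Round-trip the kind of string gmetric publishes: encode as the cron job
# effectively does, then decode it back for inspection.
poll='one-0 CPU=12 MEMORY=524288 NET_TX=0 NET_RX=0'
encoded=$(printf '%s' "$poll" | base64)
printf '%s' "$encoded" | base64 -d   # recovers the original poll line
```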
Re: [one-users] How to detect a host's free disk?
Hello World, On Fri, Nov 30, 2012 at 10:58:54AM +0800, ?? wrote: Hi all, Does OpenNebula have the ability to detect the size of a host node's idle disk? I always encounter this problem: a new virtual machine is scheduled to a node and then the VM fails, because the node has no space left. So I think the free size of the disk space should be taken into account. I think this would be a useful feature to have. +1 Cheers and Goodwill, v ?? tel: 13718913184 mail: zhan...@neusoft.com http://www.neusoft.com --- Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful. If you have received this communication in error, please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you. ---
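As a footnote for readers of the archive: later OpenNebula releases do report host disk metrics, and the feature requested here can then be expressed as a scheduler requirement in the VM template. A sketch under that assumption (the FREE_DISK attribute name and the threshold are illustrative, not verified against a specific release):

```
# VM template fragment: keep this VM off hosts that are low on disk.
SCHED_REQUIREMENTS = "FREE_DISK > 10240"
```

Check `onehost show` on your version to see which disk attributes the monitoring probes actually report before relying on this.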
Re: [one-users] Sunstone Error initializing authentication system
Hi Daniel, On Tue, Nov 20, 2012 at 10:31:22AM +0100, Daniel Molina wrote: Hi Valentin, On 20 November 2012 08:36, Valentin Bud valentin@gmail.com wrote: Hi Ruben, Thanks for your time. I followed the proposed solution but the result is the same. As `oneadmin`:

```
$ oneuser show 1 | grep PASS
PASSWORD : afc0f1457b5480afd548d5a09e14171bab315d2c
$ oneuser passwd 1 1234 --sha1
$ oneuser show 1 | grep PASS
PASSWORD : 7110eda4d09e062aa5e4a390b0a572ac0d2c0220
$ echo serveradmin:1234 > ~/.one/sunstone_auth
```

You have to update the file in /var/lib/one (if you installed system-wide): $ echo serveradmin:1234 > /var/lib/one/.one/sunstone_auth That was the file; the `oneadmin` user has `/var/lib/one` as $HOME (~). I should have posted the full path, not the ~ shortcut.

```
oneadmin@:~$ cat /var/lib/one/.one/sunstone_auth
serveradmin:1234
```

Yes, the installation is done system-wide from the 3.8.1 sources. Thank you. Cheers and Goodwill, v Cheers

```
$ cat ~/.one/sunstone_auth
serveradmin:1234
$ sunstone-server start
Stale .lock detected. Erasing it.
Error executing sunstone-server. Check /var/log/one/sunstone.error and /var/log/one/sunstone.log for more information
```

`/var/log/one/sunstone.log`:

```
-- Server configuration --
{:vnc_proxy_support_wss=>false, :vnc_proxy_cert=>nil, :one_xmlrpc=>"http://localhost:2633/RPC2", :marketplace_url=>"https://marketplace.c12g.com/appliance", :vnc_proxy_key=>nil, :debug_level=>3, :vnc_proxy_path=>"/usr/share/one/websockify/websocketproxy.py", :core_auth=>"cipher", :host=>"127.0.0.1", :lang=>"en_US", :vnc_proxy_port=>29876, :auth=>"sunstone", :port=>9869, :tmpdir=>"/var/tmp"}
Tue Nov 20 08:32:16 2012 [E]: Error initializing authentication system
Tue Nov 20 08:32:16 2012 [E]: [UserPoolInfo] User couldn't be authenticated, aborting call.
```

`/var/log/one/sunstone.error` is empty. Some guidance on how to debug further would be useful. Thank you. Cheers and Goodwill, v On Mon, Nov 19, 2012 at 10:37:02PM +0100, Ruben S.
Montero wrote: Hi, It is the plain password. Try the following to recreate the serveradmin password and Sunstone credentials: 1. oneuser passwd 1 1234 --sha1 2. echo serveradmin:1234 > /var/lib/one/.one/sunstone_auth 3. sunstone-server start Cheers Ruben On Mon, Nov 19, 2012 at 2:58 PM, Valentin Bud valentin@gmail.com wrote: Hello World, I have updated today from 3.6 to 3.8.1 from source on a Debian Squeeze machine. I didn't need nor want Sunstone until now, so I have followed the Sunstone documentation [1] to install and configure it. As the `oneadmin` user, when I try to start Sunstone I get the following in the logs:

```
/var/log/one/sunstone.log
-- Server configuration --
{:vnc_proxy_support_wss=>false, :vnc_proxy_cert=>nil, :one_xmlrpc=>"http://localhost:2633/RPC2", :marketplace_url=>"https://marketplace.c12g.com/appliance", :vnc_proxy_key=>nil, :debug_level=>3, :vnc_proxy_path=>"/usr/share/one/websockify/websocketproxy.py", :core_auth=>"cipher", :host=>"127.0.0.1", :lang=>"en_US", :vnc_proxy_port=>29876, :auth=>"sunstone", :port=>9869, :tmpdir=>"/var/tmp"}
Mon Nov 19 14:41:21 2012 [E]: Error initializing authentication system
Mon Nov 19 14:41:21 2012 [E]: No such file or directory - /var/lib/one/.one/sunstone_auth
```

Indeed the file is missing.

```
oneadmin@frontend:~$ ls -al /var/lib/one/.one/sunstone_auth
ls: cannot access /var/lib/one/.one/sunstone_auth: No such file or directory
```

It was missing even before I updated to 3.8.1. I have created the file with the following contents:

```
/var/lib/one/.one/sunstone_auth
serveradmin:af84cc76ff2f6bbede661a62f4932d739f0e1fb0
```

The password part is serveradmin's hashed key as shown by `oneuser show`.

```
$ oneuser show serveradmin | grep PASS
PASSWORD : af84cc76ff2f6bbede661a62f4932d739f0e1fb0
```

Trying to start the server again, I receive the same error, slightly different: ``` ...
Mon Nov 19 14:53:53 2012 [E]: Error initializing authentication system Mon Nov 19 14:53:53 2012 [E]: [UserPoolInfo] User couldn't be authenticated, aborting call. ``` I didn't know if $HOME/.one/sunstone_auth should list the hashed password or the clear text one, so I've given it one more try and set up the password in clear text. Same output as the one above. If it matters here goes the content of $HOME/.one directory: ``` oneadmin@frontend:~/.one$ ls -1 $HOME/.one one_auth sunstone_auth
Re: [one-users] OpenNebula 3.8.1 ovswitch ovs-ofctl bad syntax for in_port
Hi Ruben, On Tue, Nov 13, 2012 at 11:05:06AM +0100, Ruben S. Montero wrote: Hi The 3.8 version of the Openvswitch drivers use openflows as there are some incompatibility issues when using iptables and ovswitch. Note that there are some filtering limitations (compared to the iptables) regarding the definition of TCP/UDP port ranges. Mon Nov 12 15:17:44 2012 [VMM][D]: Message received: LOG E 216 post: Command sudo /usr/bin/ovs-ofctl add-flow vlan5 in_port=,dl_src=02:00:0a:80:05:32,priority=4,actions=normal failed. This may be some kind of incompatibility of the ovswitch version of your installation and the drivers. The problem here is the empty in_port. Can you deploy the VM by hand and send the output of (executed as oneadmin): sudo /usr/bin/ovs-ofctl dump-ports vlan5 VM_tap_interface I receive the following message: ``` $ sudo /usr/bin/ovs-ofctl dump-ports vlan5 vnet0 ovs-ofctl: vlan5 is not a bridge or a socket ``` OpenNebula runs on a CentOS 6.3 host and is built with Ceph patches. OpenvSwitch is at version 1.8.0. ``` $ uname -a Linux andreea.xxx.com 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ ovs-vsctl --version ovs-vsctl (Open vSwitch) 1.8.0 Compiled Sep 3 2012 23:10:42 ``` I've also posted a question to OpenvSwitch Discuss [1], because I think it's related. The vlan5 port on this particular switch is a `fake bridge`. Linux bridge compatibility layer is enabled as requested by the docs [1]. ``` $ ps aux | grep openvswitch All seems ok here The OpenFlow default rules employed by OpenNebula state on the doc [2] page the following: `These rules prevent any traffic to come out of the port the MAC address has changed.` Does this mean that traffic coming out of the port OpenNebula just created on the switch is denied/dropped? Or does it mean that traffic with the source MAC address of the VM should only come in on the specified port, the newly added port for the VM? 
The latter: any traffic with a source MAC address different from the one assigned to the VM is filtered out, to prevent a user from changing the VM MAC from the guest... This is one thing you'd want to keep around. Allowing a user to change their MAC address has some security implications. I think this is a safe measure against MITM attacks. This way the attacker cannot advertise a bogus MAC address. Maybe I'm wrong, but at a glance flows can improve security and thus are a huge improvement to OpenNebula. Do you, devs and users, think that it would be better to separate the flow definitions from the code? For now, if we want to add flow rules we have to modify `/var/lib/one/remotes/vnm/ovswitch/OpenvSwitch.rb`. I think it would be wise to have separate files with rules based on the users' needs. `ovs-ofctl` can add flows directly from a file, so it wouldn't be very hard, I guess, to separate the flows from the code. Of course one can do this himself by modifying `post`. Yes, this may be a good idea. Use a filter file with some template engine (ERB, haml...) Where would you store these files in the current directory layout of OpenNebula? I would take this as an opportunity to learn a little bit of Ruby and try to implement the above feature. Can you please tell me where I should start? I think this is a very good idea. Do you think it would be a good idea to store the files as JSON, so automation tools can drop a file in a `conf.d`-like directory at VM boot? One more thing, why does OpenNebula need the Linux bridge compatibility layer enabled? This is basically a requirement from the hypervisor, which uses brctl addif... KVM (through libvirt) supports Openvswitch without the compatibility layer since version 0.9.11. We opted to preserve this requirement and remove it in future versions (and add virtualport for NICs). Note that the driver itself does not require the compatibility layer, as it uses openvswitch commands. If I understand correctly I can disable bridge compatibility.
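The idea of keeping flows out of OpenvSwitch.rb can be sketched as a plain rules file loaded with `ovs-ofctl add-flows` (the file path, port number and MAC address below are made-up examples, not anything OpenNebula ships):

```
$ cat /tmp/vm-flows.txt
in_port=5,dl_src=02:00:0a:80:05:32,priority=40000,actions=normal
in_port=5,priority=39000,actions=drop
$ sudo ovs-ofctl add-flows vlan5 /tmp/vm-flows.txt
```

A `post` hook could then render such a file per VM (with a template engine, as suggested above) and load it in one call instead of issuing individual `add-flow` commands from Ruby.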
Do you think it would be wise to stay current with the latest official libvirt releases? Thanks for helping. [1]: http://openvswitch.org/pipermail/discuss/2012-November/008422.html Cheers and Goodwill, v
Re: [one-users] Sunstone Error initializing authentication system
Hi Daniel, Yes, it's there. ``` /etc/one/oned.conf ... AUTH_MAD = [ executable = one_auth_mad, authn = ssh,x509,ldap,server_cipher,server_x509 ] ... ``` Thank you. On Tue, Nov 20, 2012 at 10:47:31AM +0100, Daniel Molina wrote: On 20 November 2012 10:41, Valentin Bud valentin@gmail.com wrote: Hi Daniel, On Tue, Nov 20, 2012 at 10:31:22AM +0100, Daniel Molina wrote: Hi Valentin, On 20 November 2012 08:36, Valentin Bud valentin@gmail.com wrote: Hi Ruben, Thanks for your time. I followed the proposed solution but the result is the same. As `oneadmin` ``` $ oneuser show 1 | grep PASS PASSWORD : afc0f1457b5480afd548d5a09e14171bab315d2c $ oneuser passwd 1 1234 --sha1 $ oneuser show 1 | grep PASS PASSWORD : 7110eda4d09e062aa5e4a390b0a572ac0d2c0220 $ echo serveradmin:1234 ~/.one/sunstone_auth You have to update the file in /var/lib/one (If you installed system-wide) $ echo serveradmin:1234 /var/lib/one/.one/sunstone_auth That was the file, `oneadmin` user has `/var/lib/one` as $HOME (~). I should have posted using full path not ~ shortcut. ``` oneadmin@:~$ cat /var/lib/one/.one/sunstone_auth serveradmin:1234 ``` Yes the installation is done system-wide from 3.8.1 sources. Could you check in your oned.conf, if the AUTH_MAD section exists. It should look like this: AUTH_MAD = [ executable = one_auth_mad, authn = ssh,x509,ldap,server_cipher,server_x509 ] Thank you. Cheers and Goodwill, v Cheers $ cat ~/.one/sunstone_auth serveradmin:1234 $ sunstone-server start Stale .lock detected. Erasing it. Error executing sunstone-server. 
Check /var/log/one/sunstone.error and /var/log/one/sunstone.log for more information ``` `/var/log/one/sunstone.log` ``` -- Server configuration -- {:vnc_proxy_support_wss=false, :vnc_proxy_cert=nil, :one_xmlrpc=http://localhost:2633/RPC2;, :marketplace_url=https://marketplace.c12g.com/appliance;, :vnc_proxy_key=nil, :debug_level=3, :vnc_proxy_path=/usr/share/one/websockify/websocketproxy.py, :core_auth=cipher, :host=127.0.0.1, :lang=en_US, :vnc_proxy_port=29876, :auth=sunstone, :port=9869, :tmpdir=/var/tmp} Tue Nov 20 08:32:16 2012 [E]: Error initializing authentication system Tue Nov 20 08:32:16 2012 [E]: [UserPoolInfo] User couldn't be authenticated, aborting call. ``` `/var/log/one/sunstone.error` is empty. Some guidance on how to debug further would be useful. Thank you. Cheers and Goodwill, v On Mon, Nov 19, 2012 at 10:37:02PM +0100, Ruben S. Montero wrote: Hi It is in plain password, try the following to recreate the serveradmin passwd and sunstone credentials: 1.- oneuser passwd 1 1234 --sha1 2.- echo serveradmin:1234 /var/lib/one/.one/sunstone_auth 3.- sunstone-server start Cheers Ruben On Mon, Nov 19, 2012 at 2:58 PM, Valentin Bud valentin@gmail.com wrote: Hello World, I have updated today from 3.6 to 3.8.1 from source on a Debian Squeeze machine. I didn't need nor want Sunstone until now. So I have followed the Sunstone documentation [1] to install and configure it. 
As `oneadmin` user when I try to start Sunstone I get the following the logs: ``` /var/log/one/sunstone.log -- Server configuration -- {:vnc_proxy_support_wss=false, :vnc_proxy_cert=nil, :one_xmlrpc=http://localhost:2633/RPC2;, :marketplace_url=https://marketplace.c12g.com/appliance;, :vnc_proxy_key=nil, :debug_level=3, :vnc_proxy_path=/usr/share/one/websockify/websocketproxy.py, :core_auth=cipher, :host=127.0.0.1, :lang=en_US, :vnc_proxy_port=29876, :auth=sunstone, :port=9869, :tmpdir=/var/tmp} Mon Nov 19 14:41:21 2012 [E]: Error initializing authentication system Mon Nov 19 14:41:21 2012 [E]: No such file or directory - /var/lib/one/.one/sunstone_auth ``` Indeed the file in missing. ``` oneadmin@frontend:~$ ls -al /var/lib/one/.one/sunstone_auth ls: cannot access /var/lib/one/.one/sunstone_auth: No such file or directory ``` It was missing even before I have updated to 3.8.1. I have created the file with the following contents: ``` /var/lib/one/.one/sunstone_auth
Re: [one-users] Sunstone Error initializing authentication system
On Tue, Nov 20, 2012 at 11:36:11AM +0100, Daniel Molina wrote: On 20 November 2012 11:04, Valentin Bud valentin@gmail.com wrote: Hi Daniel, Yes, it's there.

```
/etc/one/oned.conf
...
AUTH_MAD = [ executable = "one_auth_mad", authn = "ssh,x509,ldap,server_cipher,server_x509" ]
...
```

Could you check in your oned.log if it is loaded: Mon Nov 19 16:35:46 2012 [AuM][I]: Loading Auth. Manager driver. Mon Nov 19 16:35:46 2012 [AuM][I]: Auth Manager loaded And check the error in oned.log when the sunstone-server is started. I have restarted `one` and yes, the Auth. Manager is loaded.

```
/var/log/one/oned.log
Tue Nov 20 11:43:14 2012 [AuM][I]: Loading Auth. Manager driver.
Tue Nov 20 11:43:14 2012 [AuM][I]: Auth Manager loaded
```

Issued `sunstone-server start` as `oneadmin` and the logs show:

```
/var/log/one/oned.log
Tue Nov 20 11:45:25 2012 [AuM][D]: Message received: AUTHENTICATE FAILURE 0 Authentication driver 'server_core' not available
Tue Nov 20 11:45:25 2012 [AuM][E]: Auth Error: Authentication driver 'server_core' not available
Tue Nov 20 11:45:25 2012 [ReM][D]: Req:3952 UID:- UserPoolInfo invoked
Tue Nov 20 11:45:25 2012 [ReM][E]: Req:3952 UID:- UserPoolInfo result FAILURE [UserPoolInfo] User couldn't be authenticated, aborting call.
```

There is no `server_core` authentication driver, so I changed `/etc/one/oned.conf` to include it in `AUTH_MAD`. Again, as `oneadmin`, I issued `sunstone-server start`. Now oned.log shows something different:

```
/var/log/one/oned.log
Tue Nov 20 11:47:52 2012 [AuM][E]: Auth Error:
Tue Nov 20 11:47:52 2012 [ReM][D]: Req:8512 UID:- UserPoolInfo invoked
Tue Nov 20 11:47:52 2012 [ReM][E]: Req:8512 UID:- UserPoolInfo result FAILURE [UserPoolInfo] User couldn't be authenticated, aborting call.
```

Found out what was wrong. The `serveradmin` user had `server_core` set as its `AUTH_DRIVER`. Changed it to `server_cipher` and it works now; Sinatra has taken the stage. Thank you all for your time and help. Cheers and Goodwill, v Thank you.
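For anyone hitting the same symptom, the fix described above can be done from the CLI; a sketch (the `oneuser chauth` subcommand is from the 3.x-era CLI, so check `oneuser --help` on your version):

```
$ oneuser show serveradmin | grep AUTH   # shows the offending server_core driver
$ oneuser chauth serveradmin server_cipher
$ sunstone-server start
```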
On Tue, Nov 20, 2012 at 10:47:31AM +0100, Daniel Molina wrote: On 20 November 2012 10:41, Valentin Bud valentin@gmail.com wrote: Hi Daniel, On Tue, Nov 20, 2012 at 10:31:22AM +0100, Daniel Molina wrote: Hi Valentin, On 20 November 2012 08:36, Valentin Bud valentin@gmail.com wrote: Hi Ruben, Thanks for your time. I followed the proposed solution but the result is the same. As `oneadmin`: ``` $ oneuser show 1 | grep PASS PASSWORD : afc0f1457b5480afd548d5a09e14171bab315d2c $ oneuser passwd 1 1234 --sha1 $ oneuser show 1 | grep PASS PASSWORD : 7110eda4d09e062aa5e4a390b0a572ac0d2c0220 $ echo serveradmin:1234 > ~/.one/sunstone_auth You have to update the file in /var/lib/one (if you installed system-wide): $ echo serveradmin:1234 > /var/lib/one/.one/sunstone_auth That was the file; the `oneadmin` user has `/var/lib/one` as $HOME (~). I should have posted the full path, not the ~ shortcut. ``` oneadmin@:~$ cat /var/lib/one/.one/sunstone_auth serveradmin:1234 ``` Yes, the installation is done system-wide from the 3.8.1 sources. Could you check in your oned.conf if the AUTH_MAD section exists. It should look like this: AUTH_MAD = [ executable = one_auth_mad, authn = ssh,x509,ldap,server_cipher,server_x509 ] Thank you. Cheers and Goodwill, v Cheers $ cat ~/.one/sunstone_auth serveradmin:1234 $ sunstone-server start Stale .lock detected. Erasing it. Error executing sunstone-server. 
Check /var/log/one/sunstone.error and /var/log/one/sunstone.log for more information ``` `/var/log/one/sunstone.log` ``` -- Server configuration -- {:vnc_proxy_support_wss=>false, :vnc_proxy_cert=>nil, :one_xmlrpc=>http://localhost:2633/RPC2, :marketplace_url=>https://marketplace.c12g.com/appliance, :vnc_proxy_key=>nil, :debug_level=>3, :vnc_proxy_path=>/usr/share/one/websockify/websocketproxy.py, :core_auth=>cipher, :host=>127.0.0.1, :lang=>en_US, :vnc_proxy_port=>29876, :auth=>sunstone, :port=>9869, :tmpdir=>/var/tmp} Tue Nov 20 08:32:16 2012 [E]: Error initializing authentication system Tue Nov 20 08:32:16 2012 [E]: [UserPoolInfo] User couldn't be authenticated, aborting call. ``` `/var/log/one/sunstone.error` is empty. Some guidance on how to debug further would be useful. Thank you. Cheers and Goodwill, v On Mon, Nov 19, 2012 at 10:37:02PM +0100, Ruben S. Montero wrote: Hi
Re: [one-users] OpenNebula 3.8.1 ovswitch ovs-ofctl bad syntax for in_port
Hello World, Sorry for the excess emails. My email client went crazy. I will pay much more attention in the future. Back to OpenNebula and OpenvSwitch drivers. Mr. Ben Pfaff put it nicely: "It is important to understand that a fake bridge is neither an Open vSwitch bridge nor an OpenFlow switch. Thus, you cannot use them in most contexts where an Open vSwitch bridge or an OpenFlow switch is required. I am surprised that the word fake in the name does not make this clear." Using the bridge the fake bridge is part of, it works: ``` $ sudo /usr/bin/ovs-ofctl dump-ports br0 vnet0 OFPST_PORT reply (xid=0x1): 1 ports port 17: rx pkts=680611, bytes=204556006, drop=0, errs=0, frame=0, over=0, crc=0 tx pkts=675160, bytes=352172599, drop=0, errs=0, coll=0 ``` I am going to keep current with libvirt and drop the bridge compatibility layer out of the equation. That's the reason I have fake bridges to begin with. Thanks. Cheers and Goodwill On Tue, Nov 20, 2012 at 12:01 PM, Valentin Bud valentin@gmail.com wrote: Hi Ruben, On Tue, Nov 13, 2012 at 11:05:06AM +0100, Ruben S. Montero wrote: Hi The 3.8 version of the Openvswitch drivers uses OpenFlow rules, as there are some incompatibility issues when using iptables and ovswitch. Note that there are some filtering limitations (compared to iptables) regarding the definition of TCP/UDP port ranges. Mon Nov 12 15:17:44 2012 [VMM][D]: Message received: LOG E 216 post: Command sudo /usr/bin/ovs-ofctl add-flow vlan5 in_port=,dl_src=02:00:0a:80:05:32,priority=4,actions=normal failed. This may be some kind of incompatibility between the ovswitch version of your installation and the drivers. The problem here is the empty in_port. 
Can you deploy the VM by hand and send the output of (executed as oneadmin): sudo /usr/bin/ovs-ofctl dump-ports vlan5 VM_tap_interface I receive the following message: ``` $ sudo /usr/bin/ovs-ofctl dump-ports vlan5 vnet0 ovs-ofctl: vlan5 is not a bridge or a socket ``` OpenNebula runs on a CentOS 6.3 host and is built with Ceph patches. OpenvSwitch is at version 1.8.0. ``` $ uname -a Linux andreea.xxx.com 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ ovs-vsctl --version ovs-vsctl (Open vSwitch) 1.8.0 Compiled Sep 3 2012 23:10:42 ``` I've also posted a question to OpenvSwitch Discuss [1], because I think it's related. The vlan5 port on this particular switch is a `fake bridge`. The Linux bridge compatibility layer is enabled as requested by the docs [1]. ``` $ ps aux | grep openvswitch ``` All seems OK here. The OpenFlow default rules employed by OpenNebula state the following on the doc [2] page: `These rules prevent any traffic to come out of the port the MAC address has changed.` Does this mean that traffic coming out of the port OpenNebula just created on the switch is denied/dropped? Or does it mean that traffic with the source MAC address of the VM should only come in on the specified port, the newly added port for the VM? The latter: any traffic with a source MAC address different from the one assigned to the VM is filtered out, to prevent a user from changing the VM MAC from the guest... This is one thing you'd want to keep around. Allowing a user to change its MAC address has some security implications. I think this is a safe measure against MITM attacks. This way the attacker cannot advertise a bogus MAC address. Maybe I'm wrong, but at a glance flows can improve security and thus are a huge improvement to OpenNebula. Do you, devs and users, think that it would be better to separate the flow definition from the code? For now, if we want to add flow rules we have to modify `/var/lib/one/remotes/vnm/ovswitch/OpenvSwitch.rb`. 
I think it would be wise to have separate files with rules based on the users' needs. `ovs-ofctl` can add flows directly from a file so it wouldn't be very hard, I guess, to separate the flows from the code. Of course one can do this by himself by modifying `post`. Yes, this may be a good idea. Use a filter file with some template engine (ERB, haml...) Where would you store these files in the current directory layout of OpenNebula? I would take this as an opportunity to learn a little bit of Ruby and try to implement the above feature. Can you please tell me where I should start? I think this is a very good idea. Do you think it would be a good idea to store the files as JSON, so automation tools can drop a file in a `conf.d`-like directory at VM boot? One more thing, why does OpenNebula need the Linux bridge compatibility layer enabled? This is basically a requirement from the hypervisor that uses brctl addif... KVM (through libvirt) has supported Openvswitch without the compatibility layer since version 0.9.11. We opted to preserve
[one-users] Sunstone Error initializing authentication system
Hello World, I have updated today from 3.6 to 3.8.1 from source on a Debian Squeeze machine. I didn't need nor want Sunstone until now. So I have followed the Sunstone documentation [1] to install and configure it. As the `oneadmin` user, when I try to start Sunstone I get the following in the logs: ``` /var/log/one/sunstone.log -- Server configuration -- {:vnc_proxy_support_wss=>false, :vnc_proxy_cert=>nil, :one_xmlrpc=>http://localhost:2633/RPC2, :marketplace_url=>https://marketplace.c12g.com/appliance, :vnc_proxy_key=>nil, :debug_level=>3, :vnc_proxy_path=>/usr/share/one/websockify/websocketproxy.py, :core_auth=>cipher, :host=>127.0.0.1, :lang=>en_US, :vnc_proxy_port=>29876, :auth=>sunstone, :port=>9869, :tmpdir=>/var/tmp} Mon Nov 19 14:41:21 2012 [E]: Error initializing authentication system Mon Nov 19 14:41:21 2012 [E]: No such file or directory - /var/lib/one/.one/sunstone_auth ``` Indeed the file is missing. ``` oneadmin@frontend:~$ ls -al /var/lib/one/.one/sunstone_auth ls: cannot access /var/lib/one/.one/sunstone_auth: No such file or directory ``` It was missing even before I updated to 3.8.1. I have created the file with the following contents: ``` /var/lib/one/.one/sunstone_auth serveradmin:af84cc76ff2f6bbede661a62f4932d739f0e1fb0 ``` The password part is serveradmin's hashed password as shown by `oneuser show`. ``` $ oneuser show serveradmin | grep PASS PASSWORD : af84cc76ff2f6bbede661a62f4932d739f0e1fb0 ``` Trying to start the server again I receive a slightly different version of the same error: ``` ... Mon Nov 19 14:53:53 2012 [E]: Error initializing authentication system Mon Nov 19 14:53:53 2012 [E]: [UserPoolInfo] User couldn't be authenticated, aborting call. ``` I didn't know if $HOME/.one/sunstone_auth should list the hashed password or the clear-text one, so I've given it one more try and set up the password in clear text. Same output as the one above. 
If it matters, here is the content of the $HOME/.one directory: ``` oneadmin@frontend:~/.one$ ls -1 $HOME/.one one_auth sunstone_auth ``` [1]: http://opennebula.org/documentation:rel3.8:sunstone Any hints? Thank you. v ___ Users mailing list Users@lists.opennebula.org http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Re: [one-users] Sunstone Error initializing authentication system
Hi Ruben, Thanks for your time. I followed the proposed solution but the result is the same. As `oneadmin`: ``` $ oneuser show 1 | grep PASS PASSWORD : afc0f1457b5480afd548d5a09e14171bab315d2c $ oneuser passwd 1 1234 --sha1 $ oneuser show 1 | grep PASS PASSWORD : 7110eda4d09e062aa5e4a390b0a572ac0d2c0220 $ echo serveradmin:1234 > ~/.one/sunstone_auth $ cat ~/.one/sunstone_auth serveradmin:1234 $ sunstone-server start Stale .lock detected. Erasing it. Error executing sunstone-server. Check /var/log/one/sunstone.error and /var/log/one/sunstone.log for more information ``` `/var/log/one/sunstone.log` ``` -- Server configuration -- {:vnc_proxy_support_wss=>false, :vnc_proxy_cert=>nil, :one_xmlrpc=>http://localhost:2633/RPC2, :marketplace_url=>https://marketplace.c12g.com/appliance, :vnc_proxy_key=>nil, :debug_level=>3, :vnc_proxy_path=>/usr/share/one/websockify/websocketproxy.py, :core_auth=>cipher, :host=>127.0.0.1, :lang=>en_US, :vnc_proxy_port=>29876, :auth=>sunstone, :port=>9869, :tmpdir=>/var/tmp} Tue Nov 20 08:32:16 2012 [E]: Error initializing authentication system Tue Nov 20 08:32:16 2012 [E]: [UserPoolInfo] User couldn't be authenticated, aborting call. ``` `/var/log/one/sunstone.error` is empty. Some guidance on how to debug further would be useful. Thank you. Cheers and Goodwill, v On Mon, Nov 19, 2012 at 10:37:02PM +0100, Ruben S. Montero wrote: Hi It is the plain password; try the following to recreate the serveradmin passwd and sunstone credentials: 1.- oneuser passwd 1 1234 --sha1 2.- echo serveradmin:1234 > /var/lib/one/.one/sunstone_auth 3.- sunstone-server start Cheers Ruben On Mon, Nov 19, 2012 at 2:58 PM, Valentin Bud valentin@gmail.com wrote: Hello World, I have updated today from 3.6 to 3.8.1 from source on a Debian Squeeze machine. I didn't need nor want Sunstone until now. So I have followed the Sunstone documentation [1] to install and configure it. 
As the `oneadmin` user, when I try to start Sunstone I get the following in the logs: ``` /var/log/one/sunstone.log -- Server configuration -- {:vnc_proxy_support_wss=>false, :vnc_proxy_cert=>nil, :one_xmlrpc=>http://localhost:2633/RPC2, :marketplace_url=>https://marketplace.c12g.com/appliance, :vnc_proxy_key=>nil, :debug_level=>3, :vnc_proxy_path=>/usr/share/one/websockify/websocketproxy.py, :core_auth=>cipher, :host=>127.0.0.1, :lang=>en_US, :vnc_proxy_port=>29876, :auth=>sunstone, :port=>9869, :tmpdir=>/var/tmp} Mon Nov 19 14:41:21 2012 [E]: Error initializing authentication system Mon Nov 19 14:41:21 2012 [E]: No such file or directory - /var/lib/one/.one/sunstone_auth ``` Indeed the file is missing. ``` oneadmin@frontend:~$ ls -al /var/lib/one/.one/sunstone_auth ls: cannot access /var/lib/one/.one/sunstone_auth: No such file or directory ``` It was missing even before I updated to 3.8.1. I have created the file with the following contents: ``` /var/lib/one/.one/sunstone_auth serveradmin:af84cc76ff2f6bbede661a62f4932d739f0e1fb0 ``` The password part is serveradmin's hashed password as shown by `oneuser show`. ``` $ oneuser show serveradmin | grep PASS PASSWORD : af84cc76ff2f6bbede661a62f4932d739f0e1fb0 ``` Trying to start the server again I receive a slightly different version of the same error: ``` ... Mon Nov 19 14:53:53 2012 [E]: Error initializing authentication system Mon Nov 19 14:53:53 2012 [E]: [UserPoolInfo] User couldn't be authenticated, aborting call. ``` I didn't know if $HOME/.one/sunstone_auth should list the hashed password or the clear-text one, so I've given it one more try and set up the password in clear text. Same output as the one above. If it matters, here is the content of the $HOME/.one directory: ``` oneadmin@frontend:~/.one$ ls -1 $HOME/.one one_auth sunstone_auth ``` [1]: http://opennebula.org/documentation:rel3.8:sunstone Any hints? Thank you. 
v -- Ruben S. Montero, PhD Project co-Lead and Chief Architect OpenNebula - The Open Source Solution for Data Center Virtualization www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
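The credential-file step discussed throughout this thread can be replayed compactly. A minimal sketch (it writes to a scratch path rather than the real `/var/lib/one/.one/sunstone_auth`, and the `chmod` is an extra precaution, not something the thread mandates):

```shell
# recreate the Sunstone credentials file; /tmp stands in for /var/lib/one/.one
auth_file=/tmp/sunstone_auth
echo 'serveradmin:1234' > "$auth_file"
chmod 600 "$auth_file"   # the file holds a plain-text password
cat "$auth_file"         # serveradmin:1234
```

After this, `sunstone-server start` should find the file, provided the `serveradmin` user's password and `AUTH_DRIVER` match what the thread converged on (`server_cipher`).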
Re: [one-users] VMs IP Clash after failure
Hi Carlos, On Tue, Nov 13, 2012 at 11:17:29AM +0100, Carlos Martín Sánchez wrote: Hi Valentin, That's exactly how it works now. When you set NIC = [ NETWORK_ID = vnet_id, IP = ip ] you are not hardcoding the IP, you are requesting that IP from the leases of the network. It will be assigned only if it is free, and will be marked as used, just like any other IP assigned automatically. You are right. I have done some tests now and it works as you say it does. I meant something else though. I will take it step by step so I can make myself understood. I have one template with ID 17. ``` $ onetemplate show 17 | grep IP IP=172.16.18.54 ``` I clone the template and try to instantiate it, and an error pops up telling me that the IP address is taken, which is the right thing to do. Now, as a user, when I create a new template, either by `create` or `clone`, I have to *remember* to modify the IP address in case it is in use. I have to find out which is the next usable IP address by looking at all templates. I can see the active leases using `onevnet show` but I don't have a way to know if the IP I choose to use in the new template is in use by other templates or VMs. That's not a problem, more of a user-friendliness thing. Cheers and Goodwill, v Regards -- Carlos Martín, MSc Project Engineer OpenNebula - The Open-source Solution for Data Center Virtualization www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula On Tue, Nov 13, 2012 at 10:39 AM, Valentin Bud valentin@gmail.com wrote: Hello Cyrille, I have noticed something this morning about IP address management inside OpenNebula and I thought I'd add my opinion. As far as I have noticed, whenever you instantiate a machine OpenNebula uses the next IP address from the defined pool in the vnet. That's almost DHCP. One can check the status of the network using `onevnet show vnet_id`. Under `USED LEASES` one can see what leases are in use. 
If, for whatever reason, one wants to use a certain IP address for a particular VM, one can use `IP=x.x.x.x` in the template. I think here arises an issue. Human beings aren't good with numbers; that's one of the reasons we have DNS. I, for one, forget which IP is allocated to which VM, I am talking about the ones I specify in the template using `IP`. I don't care which IP is allocated to which VM, I just care which IPs are allocated or, better said, reserved from the pool. I know there are tools that perform the specific function of IP address management. OpenNebula could check if the IP address is in use whenever one creates/clones a template. The reserved IP addresses could be listed with `onevnet show vnet_id` under `RESERVED LEASES` or something. What do you people think about this? I am also aware that you guys welcome patches. I wish my Ruby skills were better so I could code this feature by myself. Speaking of Ruby, does anyone have some good docs for a noob? :) Cyrille, I hope I haven't hijacked your thread. I think the above thoughts are connected with the last question from your email. If that's not the case I am sorry. Cheers and Goodwill, v
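The check Valentin is asking for can be approximated today with a bit of shell: collect the `IP=` attributes from all templates and test the candidate against them before instantiating. A sketch with inline stand-in data (a real run would feed it from `onetemplate show` output; the variable names are made up for illustration):

```shell
# stand-in for the IP= lines collected from `onetemplate show` output
template_ips='IP=172.16.18.54
IP=172.16.18.60'
candidate="172.16.18.54"

# -x matches the whole line, so 172.16.18.5 won't match 172.16.18.54
if printf '%s\n' "$template_ips" | grep -qx "IP=$candidate"; then
  echo "IP $candidate is already reserved by a template"
else
  echo "IP $candidate looks free"
fi
```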
[one-users] OpenNebula 3.8.1 ovswitch ovs-ofctl bad syntax for in_port
Hello World, I have updated OpenNebula to 3.8.1 from source on CentOS 6.3. I have noticed that the ovswitch vnm now adds flows. That's very nice. I have never worked with flows from OpenvSwitch but I have read a little bit about them and they seem to bring a lot of (security) benefits. However, the `post` ovswitch vnm script fails. ``` /var/log/one/oned.log Mon Nov 12 15:17:44 2012 [VMM][D]: Message received: LOG I 216 post: Executed sudo /usr/bin/ovs-vsctl set Port vnet0 tag=5. Mon Nov 12 15:17:44 2012 [VMM][D]: Message received: LOG I 216 ovs-ofctl: vlan5 is not a bridge or a socket Mon Nov 12 15:17:44 2012 [VMM][D]: Message received: LOG E 216 post: Command sudo /usr/bin/ovs-ofctl add-flow vlan5 in_port=,dl_src=02:00:0a:80:05:32,priority=4,actions=normal failed. Mon Nov 12 15:17:44 2012 [VMM][D]: Message received: LOG E 216 post: ovs-ofctl: dl_src=02:00:0a:80:05:32: bad syntax for in_port Mon Nov 12 15:17:44 2012 [VMM][D]: Message received: LOG E 216 ovs-ofctl: dl_src=02:00:0a:80:05:32: bad syntax for in_port ``` The vlan5 port on this particular switch is a `fake bridge`. The Linux bridge compatibility layer is enabled as requested by the docs [1]. ``` $ ps aux | grep openvswitch root 1247 0.0 0.0 39768 2112 ?Ss 10:13 0:02 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err ... root 1259 0.0 0.0 40924 7812 ?SLs 10:13 0:10 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer ... root 1280 0.0 0.0 39728 1088 ?Ss 10:13 0:00 ovs-brcompatd -vconsole:emer -vsyslog:err -vfile:info --no-chdir ... $ ovs-vsctl show Port vlan5 tag: 5 Interface vlan5 type: internal ``` Sudo rules are in place for `oneadmin` to be able to execute ovs-* commands without a tty and password. The OpenFlow default rules employed by OpenNebula state the following on the doc [2] page: `These rules prevent any traffic to come out of the port the MAC address has changed.` Does this mean that traffic coming out of the port OpenNebula just created on the switch is denied/dropped? 
Or does it mean that traffic with the source MAC address of the VM should only come in on the specified port, the newly added port for the VM? Maybe I'm wrong, but at a glance flows can improve security and thus are a huge improvement to OpenNebula. Do you, devs and users, think that it would be better to separate the flow definition from the code? For now, if we want to add flow rules we have to modify `/var/lib/one/remotes/vnm/ovswitch/OpenvSwitch.rb`. I think it would be wise to have separate files with rules based on the users' needs. `ovs-ofctl` can add flows directly from a file so it wouldn't be very hard, I guess, to separate the flows from the code. Of course one can do this by himself by modifying `post`. One more thing, why does OpenNebula need the Linux bridge compatibility layer enabled? Can anybody shed some light? [1]: http://opennebula.org/documentation:rel3.8:openvswitch#hosts_configuration [2]: http://opennebula.org/documentation:rel3.8:openvswitch#openflow_rules Thanks. Cheers and Goodwill, v
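The failing `add-flow` above can be understood without an Open vSwitch installation at hand: the driver interpolates the port number it looked up for the VM's tap interface into the flow spec, and when that lookup returns nothing (as it does against a fake bridge), the spec comes out with an empty `in_port=`, which `ovs-ofctl` then rejects with "bad syntax for in_port". A shell sketch of the interpolation (hypothetical variable names, not the actual OpenvSwitch.rb code):

```shell
in_port=""                   # empty: the port lookup failed on the fake bridge
dl_src="02:00:0a:80:05:32"   # the VM's MAC, taken from the log above

# the spec the driver would pass to: ovs-ofctl add-flow vlan5 "$flow"
flow="in_port=${in_port},dl_src=${dl_src},priority=4,actions=normal"
echo "$flow"   # in_port=,dl_src=02:00:0a:80:05:32,priority=4,actions=normal
```

With a real bridge the lookup yields a number (e.g. `port 17` in the `dump-ports` output later in the thread) and the same spec is accepted.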
Re: [one-users] Failed to delete VM
Hello Cyrille, On Fri, Nov 09, 2012 at 11:11:50AM +0100, Duverne, Cyrille wrote: Hello, A dirty nasty DELETE from DB worked as a charm :) You could have used the `onevm` CLI program. ``` $ onevm delete <vm_id> ``` I don't know if deleting a VM from the DB is safe; using the one* CLI commands is definitely safe. Cheers and Goodwill, v Cheers Cyrille
Re: [one-users] ERROR: failed to get domain
Hello, The double // has nothing to do with your error, at least as far as I have noticed. Can you provide more info from your log? First thing that comes to mind, are the permissions of /var/lib/one correct? v On Wed, Oct 17, 2012 at 8:12 AM, Kannadhasan Thangadurai kannadhasa...@payoda.com wrote: Hi Jorge, Kindly let me know in which config file I have to change the (//) path. Please guide me. Thanks, Kannadhasan Thangadurai. On Tue, Oct 16, 2012 at 8:07 PM, Jorge Mario Manuel Ortega ortegajo...@gmail.com wrote: check the double slash after /var/lib/one (I put it in bold text): error: Domain not found: no domain with matching name '/var/lib/one//datastores/0/23/deployment.1' Live long, and Prosper .- 2012/10/16 Kannadhasan Thangadurai kannadhasa...@payoda.com Hi Team, I have installed Opennebula 3.6 on CentOS 6.3. I have created an image, and I am getting the below error while the image status is changing from PROLOG to BOOT state. failed to get domain '/var/lib/one//datastores/0/23/deployment.1' error: Domain not found: no domain with matching name '/var/lib/one//datastores/0/23/deployment.1' ExitCode: 0 After this error message, the image changed to UNKNOWN state. Kindly help me on this issue. Thanks, Kannadhasan Thangadurai.
Re: [one-users] The type of processor required for OpenNebula setup
Hello there, OpenNebula *doesn't* require a certain CPU. The OpenNebula frontend doesn't even require the x86_64 architecture; it can be installed on i386 as well. Note that amd64 and x86_64 are two names for the same 64-bit architecture, so neither package requires a particular vendor's CPU. The virtualization hypervisor in use requires a certain CPU, or more specifically, a certain extension: the virtualization one. Since you are asking about CentOS 6 I think you'll use either KVM or Xen. As far as I know they both work with either Intel or AMD CPUs with virtualization extensions. Maybe AMD names this differently, I don't know. Cheers and Good Will, v On Wed, Sep 19, 2012 at 1:49 PM, Qiubo Su (David Su) qiub...@gmail.com wrote: Dear OpenNebula Team, I want to download OpenNebula and see there are options like OpenNebula 3.2.1 Ubuntu 10.0.4 amd64 and OpenNebula 3.2.1 CentOS 6.0 x86_64. For OpenNebula 3.2.1 Ubuntu 10.0.4 amd64, we have to buy an AMD processor, but for OpenNebula 3.2.1 CentOS 6.0 x86_64, what type of processor should we buy? Thanks, Q.S.
Re: [one-users] Failed to create VM domain
Hi, I have managed to get OpenNebula running on a CentOS 6.3 host. First thing that comes to mind is polkit. I honestly don't remember the exact error I received but I do know that OpenNebula didn't work until I had configured polkit. ``` # cat /etc/polkit-1/localauthority/50-local.d/50-org.libvirt.unix.manage-opennebula.pkla [Allow oneadmin user to manage virtual machines] Identity=unix-user:oneadmin Action=org.libvirt.unix.manage #Action=org.libvirt.unix.monitor ResultAny=yes ResultInactive=yes ResultActive=yes ``` Also, qemu is configured to run under the oneadmin user and group: ``` # cat /etc/libvirt/qemu.conf | egrep '^user|^group' user = oneadmin group = oneadmin ``` Another problem I have encountered using libvirt installed from rpm is that the VMs won't start, complaining that some lsi device is missing, if you use IDE or SCSI disks. This is easy, just go with virtio devices. I think the simplest way to set this up would be to edit /etc/one/oned.conf and change DEFAULT_DEVICE_PREFIX: DEFAULT_DEVICE_PREFIX = vd Other than that I've had no problems. Well, one more, but that's specific to my environment. I use OpenNebula with RVM for Ruby and the init.d script couldn't find the ruby executable. For everyone who's interested in this, I have added the RVM ruby path to $PATH in the oned init.d script: ``` # Source function library. . /etc/rc.d/init.d/functions RETVAL=0 PATH=$PATH:/usr/local/rvm/rubies/ruby-1.8.7-p370/bin/ ``` Oh, one more thing, I have installed OpenNebula 3.6 from source. Good Will, v On Mon, Sep 10, 2012 at 11:25 AM, Virginia Martín-Rubio Pascual virginia.martinru...@rediris.es wrote: Hi, OpenNebula is running in CentOS 6.3. The /var/log/libvirt/qemu/one-9.log file doesn't exist in the host. When I create a virtual machine manually in the host (with virt-manager) a /var/log/libvirt/qemu/newVM.log file is created, but when the virtual machine is created with OpenNebula, this file doesn't appear... 
The libvirtd.conf contains these lines: ``` log_level = 1 log_filters=1:libvirt 1:util 1:qemu log_outputs=1:file:/var/log/libvirt/libvirtd.log ``` Cheers, Virginia. On 07/09/2012, at 15:25, Valentin Bud wrote: Hi, What OS are you running OpenNebula on and what does /var/log/libvirt/qemu/one-9.log output? Good Will, v On Fri, Sep 7, 2012 at 4:06 PM, Virginia Martín-Rubio Pascual virginia.martinru...@rediris.es wrote: Hi, When I try to instantiate a VM template, I obtain this error: ... Fri Sep 7 14:48:27 2012 [DiM][I]: New VM state is ACTIVE. Fri Sep 7 14:48:27 2012 [LCM][I]: New VM state is PROLOG. Fri Sep 7 14:48:27 2012 [VM][I]: Virtual Machine has no context Fri Sep 7 14:48:32 2012 [TM][I]: clone: Cloning ../../103/87a4fbf0be08256920937769ee8c8b9e in democrito.admin.rediris.es:/var/lib/one/datastores/0/9/disk.0 Fri Sep 7 14:48:32 2012 [TM][I]: ExitCode: 0 Fri Sep 7 14:51:24 2012 [TM][I]: clone: Cloning ../../104/274b59f9fd0b1c200cead63ad81c5035 in democrito.admin.rediris.es:/var/lib/one/datastores/0/9/disk.1 Fri Sep 7 14:51:24 2012 [TM][I]: ExitCode: 0 Fri Sep 7 14:51:24 2012 [LCM][I]: New VM state is BOOT Fri Sep 7 14:51:24 2012 [VMM][I]: Generating deployment file: /var/lib/one/9/deployment.8 Fri Sep 7 14:51:25 2012 [VMM][I]: ExitCode: 0 Fri Sep 7 14:51:25 2012 [VMM][I]: Successfully execute network driver operation: pre. 
Fri Sep 7 14:51:25 2012 [VMM][I]: Command execution fail: cat << EOT | /var/lib/one/remotes/vmm/kvm/deploy /var/lib/one/datastores/0/9/deployment.8 democrito.admin.rediris.es 9 democrito.admin.rediris.es Fri Sep 7 14:51:25 2012 [VMM][I]: error: Failed to create domain from /var/lib/one/datastores/0/9/deployment.8 Fri Sep 7 14:51:25 2012 [VMM][I]: error: An error occurred, but the cause is unknown Fri Sep 7 14:51:25 2012 [VMM][E]: Could not create domain from /var/lib/one/datastores/0/9/deployment.8 Fri Sep 7 14:51:25 2012 [VMM][I]: ExitCode: 255 Fri Sep 7 14:51:25 2012 [VMM][I]: Failed to execute virtualization driver operation: deploy. Fri Sep 7 14:51:25 2012 [VMM][E]: Error deploying virtual machine: Could not create domain from /var/lib/one/datastores/0/9/deployment.8 Fri Sep 7 14:51:25 2012 [DiM][I]: New VM state is FAILED I've tried to create the domain manually in the host but I've obtained the same error: $ virsh -d 0 create /var/lib/one/datastores/0/9/deployment.8 create: file(optdata): /var/lib/one/datastores/0/9/deployment.8 error: Failed to create domain from /var/lib/one/datastores/0/9/deployment.8 error: An error occurred, but the cause is unknown I've looked at the libvirtd.log file in the host, but I haven't found any relevant information about the problem (I've changed the log_level to debug)... Does anyone have any idea about what could be the reason for this error? Thanks in advance. Cheers, Virginia.
Re: [one-users] BUG report: rotten file permissions in /etc/one
Hello World, I can confirm this on CentOS 6.3 installed from source. Used -u oneadmin -g oneadmin. Thanks Matthew for noticing this. Good Will, v On Fri, Sep 7, 2012 at 1:07 PM, Matthew Patton mpat...@inforelay.com wrote: In the CentOS/RHEL rpm at least, several files contain usernames and passwords, yet the files are mode 644. They should be mode 640 and gid=oneadmin. Similarly for directories. -- Cloud Services Architect, Senior System Administrator InfoRelay Online Systems (www.inforelay.com)
Re: [one-users] Failed to create VM domain
Hi, What OS are you running OpenNebula on and what does /var/log/libvirt/qemu/one-9.log output? Good Will, v On Fri, Sep 7, 2012 at 4:06 PM, Virginia Martín-Rubio Pascual virginia.martinru...@rediris.es wrote: Hi, When I try to instantiate a VM template, I obtain this error: ... Fri Sep 7 14:48:27 2012 [DiM][I]: New VM state is ACTIVE. Fri Sep 7 14:48:27 2012 [LCM][I]: New VM state is PROLOG. Fri Sep 7 14:48:27 2012 [VM][I]: Virtual Machine has no context Fri Sep 7 14:48:32 2012 [TM][I]: clone: Cloning ../../103/87a4fbf0be08256920937769ee8c8b9e in democrito.admin.rediris.es:/var/lib/one/datastores/0/9/disk.0 Fri Sep 7 14:48:32 2012 [TM][I]: ExitCode: 0 Fri Sep 7 14:51:24 2012 [TM][I]: clone: Cloning ../../104/274b59f9fd0b1c200cead63ad81c5035 in democrito.admin.rediris.es:/var/lib/one/datastores/0/9/disk.1 Fri Sep 7 14:51:24 2012 [TM][I]: ExitCode: 0 Fri Sep 7 14:51:24 2012 [LCM][I]: New VM state is BOOT Fri Sep 7 14:51:24 2012 [VMM][I]: Generating deployment file: /var/lib/one/9/deployment.8 Fri Sep 7 14:51:25 2012 [VMM][I]: ExitCode: 0 Fri Sep 7 14:51:25 2012 [VMM][I]: Successfully execute network driver operation: pre. 
Fri Sep 7 14:51:25 2012 [VMM][I]: Command execution fail: cat << EOT | /var/lib/one/remotes/vmm/kvm/deploy /var/lib/one/datastores/0/9/deployment.8 democrito.admin.rediris.es 9 democrito.admin.rediris.es Fri Sep 7 14:51:25 2012 [VMM][I]: error: Failed to create domain from /var/lib/one/datastores/0/9/deployment.8 Fri Sep 7 14:51:25 2012 [VMM][I]: error: An error occurred, but the cause is unknown Fri Sep 7 14:51:25 2012 [VMM][E]: Could not create domain from /var/lib/one/datastores/0/9/deployment.8 Fri Sep 7 14:51:25 2012 [VMM][I]: ExitCode: 255 Fri Sep 7 14:51:25 2012 [VMM][I]: Failed to execute virtualization driver operation: deploy. Fri Sep 7 14:51:25 2012 [VMM][E]: Error deploying virtual machine: Could not create domain from /var/lib/one/datastores/0/9/deployment.8 Fri Sep 7 14:51:25 2012 [DiM][I]: New VM state is FAILED I've tried to create the domain manually in the host but I've obtained the same error: $ virsh -d 0 create /var/lib/one/datastores/0/9/deployment.8 create: file(optdata): /var/lib/one/datastores/0/9/deployment.8 error: Failed to create domain from /var/lib/one/datastores/0/9/deployment.8 error: An error occurred, but the cause is unknown I've looked at the libvirtd.log file in the host, but I haven't found any relevant information about the problem (I've changed the log_level to debug)... Does anyone have any idea about what could be the reason for this error? Thanks in advance. Cheers, Virginia.
[one-users] open /dev/kvm: Permission denied on Debian Squeeze
Hello, I have recently set up OpenNebula 3.6 on 2 Debian Squeeze boxes. Both machines function as hosts, one of them as the frontend. I have followed the OpenNebula documentation [1] to set up the KVM driver on the hosts.

### Host A
# kvm --version
QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5), Copyright (c) 2003-2008 Fabrice Bellard
# virsh --version
0.8.3
# grep -vE '^($|#)' /etc/libvirt/qemu.conf
user = oneadmin
group = oneadmin
dynamic_ownership = 0
# id oneadmin
uid=1001(oneadmin) gid=1001(oneadmin) groups=1001(oneadmin),106(kvm),108(libvirt)
# ls -al /dev/kvm
crw-rw---- 1 root kvm 10, 232 Jul 25 11:23 /dev/kvm

### Host B
# kvm --version
QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5), Copyright (c) 2003-2008 Fabrice Bellard
# virsh --version
0.8.3
# grep -vE '^($|#)' /etc/libvirt/qemu.conf
user = oneadmin
group = oneadmin
dynamic_ownership = 0
# id oneadmin
uid=1001(oneadmin) gid=1001(oneadmin) groups=1001(oneadmin),106(kvm),108(libvirt)
# ls -al /dev/kvm
crw-rw---- 1 root kvm 10, 232 Jul 25 11:23 /dev/kvm

It doesn't matter on which host the VM gets deployed; the error is the same.
The error follows:

LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name one-4 -uuid a7db4cd7-e258-503a-cc57-59d2dc1135ea -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/one-4.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -boot c -device lsi,id=scsi0,bus=pci.0,addr=0x5 -drive file=/var/lib/one/datastores/0/4/disk.0,if=none,id=drive-scsi0-0-0,boot=on,format=qcow2 -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0 -device rtl8139,vlan=0,id=net0,mac=02:00:0a:41:02:65,bus=pci.0,addr=0x3 -net tap,fd=36,vlan=0,name=hostnet0 -device rtl8139,vlan=1,id=net1,mac=02:00:0a:41:03:65,bus=pci.0,addr=0x4 -net tap,fd=37,vlan=1,name=hostnet1 -usb -vnc 0.0.0.0:4 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
open /dev/kvm: Permission denied
Could not initialize KVM, will disable KVM support

I don't have AppArmor installed or any other tool of this kind. It's just a stock Debian Squeeze install. OpenNebula was compiled on another machine and installed on Host A, which is the frontend. Does any of you have an idea about this error? I have Googled around but could not find an answer. Everything seems correct as per the OpenNebula KVM documentation [1]. Thank you for your time invested in reading this email. Cheers and Goodwill, Valentin Bud

[1]: http://opennebula.org/documentation:rel3.6:kvmg

-- w: http://ing.enia.re/ http://databus.ro/blog in: http://www.linkedin.com/pub/valentin-bud/9/881/830 t: https://twitter.com/valentinbud
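The `ls -al /dev/kvm` output above is the crux: qemu runs as oneadmin, so oneadmin must either be in the device's group or the device's group must change. A small sketch that prints the three facts that matter (the show_access name is ours; it is demonstrated on a scratch file because /dev/kvm only exists on the hypervisor):

```shell
# show_access: print owner, group and octal mode of a path (GNU stat).
show_access() {
    stat -c 'owner=%U group=%G mode=%a' "$1"
}

# On the hypervisor host one would run:
#   show_access /dev/kvm     # want mode=660 and a group oneadmin belongs to
#   id -nG oneadmin          # confirm oneadmin's group list as the kernel sees it
# Demonstration on a scratch file:
f="${TMPDIR:-/tmp}/fake-kvm-node"
: > "$f"
chmod 660 "$f"
show_access "$f"
```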
Re: [one-users] open /dev/kvm: Permission denied on Debian Squeeze
Hello Javier, Thank you for your answer. Indeed, changing the /dev/kvm group to oneadmin solved the problem. I can now happily launch VMs on the Cloud. For future reference, to make the change persist after a reboot I have done:

# cat /etc/udev/rules.d/60-qemu-kvm.rules
KERNEL=="kvm", GROUP="oneadmin", MODE="0660"

I have also read the man page for udev to get this right. Learned something new today :). Maybe the OpenNebula developers could add this to the documentation in the KVM Configuration [1] section as a note for Debian Squeeze.

NOTE: On Debian Squeeze, when creating a VM as a regular user, the only group that is taken into account is the one that appears as 'gid' (oneadmin in this case). To solve the problem, change the owner of /dev/kvm to root:oneadmin. To make the change persist after a reboot:

# cat /etc/udev/rules.d/60-qemu-kvm.rules
KERNEL=="kvm", GROUP="oneadmin", MODE="0660"

[1] - http://opennebula.org/documentation:rel3.6:kvmg

Cheers and Goodwill, Valentin Bud

On Wed, Jul 25, 2012 at 1:15 PM, Javier Alvarez javier.alva...@bsc.es wrote: Hello Valentin, Apparently, when creating a VM as a regular user, the only group that is taken into account is the one that appears as 'gid' (oneadmin in this case). So what I did to solve the problem was to change the owner of /dev/kvm to root:oneadmin. Best, Javi

On 25/07/12 11:19, Valentin Bud wrote: Hello, I have recently set up OpenNebula 3.6 on 2 Debian Squeeze boxes. Both machines function as hosts, one of them as the frontend. I have followed the OpenNebula documentation [1] to set up the KVM driver on the hosts.
[Rest of the quoted original message trimmed; it repeats the post above in full.]

-- Javier Álvarez Cid-Fuentes Grid Computing and Clusters Group Barcelona Supercomputing Center (BSC-CNS) Tel. (+34) 93 413 72 46
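For reference, the udev rule quoted in the fix above needs double quotes around its values in the actual rules file (they appear to have been stripped somewhere in the mail). A sketch that writes and sanity-checks the rule; the path is moved under /tmp here only so it is runnable anywhere, on the host it belongs in /etc/udev/rules.d/:

```shell
# Write the persistent group/mode rule for /dev/kvm. The real file is
# /etc/udev/rules.d/60-qemu-kvm.rules; /tmp is used here for illustration.
rules="${TMPDIR:-/tmp}/60-qemu-kvm.rules"
cat > "$rules" <<'EOF'
KERNEL=="kvm", GROUP="oneadmin", MODE="0660"
EOF
cat "$rules"

# On the host, apply the rule without rebooting:
#   udevadm control --reload-rules
#   udevadm trigger --sysname-match=kvm
```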
Re: [one-users] Instructions for preparing Debian squeeze images for One
On Sat, May 19, 2012 at 12:38 AM, Olivier Berger olivier.ber...@it-sudparis.eu wrote: Hi. FYI, I've added some bits to http://wiki.debian.org/OpenNebula/PreparingDebianVmImage in order to try and help document the bits needed for preparing a Debian image for OpenNebula. Nothing really fancy or new, but I thought it wouldn't harm to add it to Debian's wiki, since OpenNebula packaging will be available in the next release (wheezy), to be frozen soon. Hope this helps. Best regards.

Hello, That's a very useful resource. Thank you for taking the time to write it. Cheers and Goodwill,
[one-users] OpenNebula CloudWorkshop Romania Timișoara
Hello Community, My name is Valentin Bud. I started using OpenNebula a few months ago. I thank you for this great tool you are developing; words cannot really express my gratitude. I have only been playing with OpenNebula until now, no production deployments, but those are about to follow really soon. I like the tool because it gives you flexibility. What I love about it is that I can do a lot of things using shell scripts. On February 16 I am going to host a CloudWorkshop for the local business and student communities. In the first part of the workshop I am going to talk about the Cloud from a business perspective and how the Cloud could help small companies and startups. The second part is dedicated to the students of the local technical university, Universitatea Politehnica Timișoara. I am about to build an email Cloud using OpenNebula for my personal use, and will present OpenNebula and the email Cloud as a case study. This workshop is the first in a series that I want to run around Romania in the cities with the most recognized technical universities, such as Cluj, Iași, București. Cloud Computing is the future and I want to promote this concept throughout Romania so we don't fall behind. In cities with technical universities there is a concentration of creative and innovative minds among the students. Why am I telling you all this? As an appreciation for the work this community has put into OpenNebula. If I am going to use OpenNebula, you should know about it. It is you, the community, who made this possible. I don't want to do marketing on this mailing list, and if I am crossing the line with this please remove the message, but the event has a Facebook page at the following link - https://www.facebook.com/pages/CloudWorkshop/137624213023654. The page is in Romanian though.
Have a wonderful day people,
Re: [one-users] Mixed mode UBUNTU+ RH KVM
Hello Nikolay, On Wed, Dec 14, 2011 at 4:06 PM, Nikolay Gar nikolay@gmail.com wrote: Hello folks. Quick question: is it possible to use a frontend based on Ubuntu 11.04 and 2 cluster nodes on RH ES 6.0 as KVM nodes? Thanks a lot, Nikolay

SSH is used to communicate with the nodes, either to transfer the VM image (if the SSH Transfer Manager is being used, of course) or to gather status information about the load/health of the node. If you are using NFS as the storage backend, the nodes must have a compatible NFS client in order to mount the shares. libvirt is set up on the nodes in order to start/stop/deploy VMs on top of KVM. SSH and NFS are protocols and libvirt is a standard library, so your mix should work. I'm not trying to be condescending or anything; I'm just approaching your question with the above logic. The docs don't specifically talk about mixing distributions. Enjoy life, v
Re: [one-users] Definition Template File Extensions
On Mon, Dec 5, 2011 at 8:46 AM, Vickreman Chettiar vickre...@gmail.com wrote: Hi. I'm currently in the planning/designing stages of setting up an OpenNebula-based public cloud. I'm drawing up the necessary definition templates, which for the time being I have saved as ODT files. I'm wondering what file extensions each of the following definition templates should be saved with, for deployment in an OpenNebula cloud.

- Virtual Machine Definition Template
- Image Definition Template
- Virtual Network Definition File

Regards to all fellow OpenNebula users. Vickreman Chettiar http://sg.linkedin.com/in/vickremanchettiar I'm quite particular about particular fields of quiet physics such as particular physics.

Hi Vickreman, I don't think the file extension really matters; it matters only to you, for identifying the templates internally. For example, I use *.vm.tpl for Virtual Machine Definition Templates, *.img.tpl for Images and *.vnet.tpl for Virtual Networks. I could be wrong, so others who know better please step in and clarify. Have a great day, Valentin Bud
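To make the point in the reply concrete: the templates are plain-text KEY = VALUE files, so any extension (or none) works; the *.vm.tpl scheme above is purely a local naming convention. A minimal, hypothetical VM template under that convention, with illustrative values:

```shell
# A minimal VM Definition Template; the name and values are made up.
tpl="${TMPDIR:-/tmp}/test.vm.tpl"
cat > "$tpl" <<'EOF'
NAME   = "test-vm"
CPU    = 1
MEMORY = 512
EOF
cat "$tpl"

# The CLI takes a file path and never inspects the extension, e.g.:
#   onetemplate create "$tpl"
```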
Re: [one-users] [Question]Can Opennebula run on VMWare machine?
On Mon, Dec 5, 2011 at 6:47 AM, cat fa boost.subscrib...@gmail.com wrote: I wonder whether it is possible to run OpenNebula on a host machine?

2011/12/5 Ruben S. Montero rsmont...@opennebula.org: Hi, There should be no problem running the OpenNebula front-end in a virtual machine. Check that you have SSH access to the 192.168.1.2 host. Cheers, ruben

On Sat, Dec 3, 2011 at 3:28 PM, cat fa boost.subscrib...@gmail.com wrote: There is a VMware Workstation Server in our lab. We don't have enough physical computers, so we create virtual machines on VMware and set up OpenNebula there. However, I cannot create a host. I used the onehost create 192.168.1.2 im_kvm vmm_kvm tm_shared command, but the state of the host was always error. I don't know whether it's OK to run OpenNebula on a virtual machine?

-- Ruben S. Montero, PhD Project co-Lead and Chief Architect OpenNebula - The Open Source Toolkit for Data Center Virtualization www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula

Hi, Yes, you can run OpenNebula on a host machine; I think that is the best way to run it. You can run OpenNebula and the VMware Workstation Server hypervisor on the same machine, but not on an ESX(i) hypervisor host, because that is a bare-metal hypervisor. Have a great day,
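Ruben's hint ("check that you have ssh access") can be scripted. When onehost reports a node permanently in the error state, a failing password-less SSH login from the frontend's oneadmin account is the usual cause. A sketch, with the helper name and option set our own:

```shell
# check_node_ssh: verify password-less SSH from the frontend to a node,
# the way the OpenNebula drivers need it (no prompt, no password).
check_node_ssh() {
    node="$1"
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "oneadmin@$node" true 2>/dev/null; then
        echo "ssh to $node ok"
    else
        echo "ssh to $node FAILED - fix keys before onehost create"
        return 1
    fi
}

# On the frontend, for the host from the thread:
#   check_node_ssh 192.168.1.2
```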