Re: [one-users] DNS search domains and rc.local not being read in VMs

2013-10-03 Thread Evgeniy Suvorov
Hello,
`And define the $searchdomain variable in your network template`
Where can I find the network template?


2013/9/26 Campbell, Bill bcampb...@axcess-financial.com

 There should be DNS scripts in the contextualization packages that provide
 some assignment of DNS.  You should be able to slightly modify a script to
 pick up that variable you define, i.e.

 echo "search $searchdomain" >> /etc/resolv.conf

 And define the $searchdomain variable in your network template.


 Be advised, I only see this script on the Ubuntu context packages (not
 RHEL/Cent, but could be added either way)
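 As a sketch of what such a modified context script could look like (the
 variable name SEARCHDOMAIN and the file paths are assumptions; adapt them
 to your context package):

```shell
# Hypothetical context hook: append the search domain from the CONTEXT
# variables to resolv.conf. On a real VM, SEARCHDOMAIN would be sourced
# from the context CD-ROM (e.g. /mnt/context.sh) and RESOLV would be
# /etc/resolv.conf; placeholder values are used here for illustration.
SEARCHDOMAIN="example.com"
RESOLV=resolv.conf

if [ -n "$SEARCHDOMAIN" ]; then
    echo "search $SEARCHDOMAIN" >> "$RESOLV"
fi
cat "$RESOLV"
```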
 --
 *From: *jerry steele jerry.ste...@cgg.com
 *To: *users@lists.opennebula.org
 *Sent: *Thursday, September 26, 2013 6:42:23 AM
 *Subject: *[one-users] DNS search domains and rc.local not being read in
 VMs


  Hello,



 I posted a few days ago about network configuration seemingly not being
 picked up properly – I think that was mainly due to my contextualisation
 packages being for the wrong version, and so the context CDROM device
 wasn’t being mounted.



 So now I have a separate, but related, issue.



 I need to be able to have DNS search domains in my /etc/resolv.conf. I
 tried to achieve this by adding a SEARCH attribute to the virtual network
 template, but it doesn’t seem to be picked up (looking in the context
 scripts, there’s nothing there to pick it up, so that’s kind of expected).
 I then tried to add it in /etc/rc.local, along with some lines to correctly
 set the hostname based on IP, but this doesn't appear to be picked up
 either.



 Could anyone tell me what might be going wrong here? Could it be the case
 that rc.local is not being run for some reason at boot? When I test the VM
 outside ONE, rc.local runs fine…



 Any help greatly appreciated.



 Thanks


 *Jerry Steele*

 IT Support Specialist

 Subsurface Imaging



 CGGVeritas Services (UK) Ltd

 Crompton Way

 Crawley

 W Sussex

 RH10 9QN



 T  +44 (0)1293 683264 (Internal: 233264)

 M +44 (0)7920 237105



 www.cgg.com



 *This email and any accompanying attachments are confidential. The
 information
 is intended solely for the use of the individual to whom it is addressed.
 Any review,
 disclosure, copying, distribution, or use of the email by others is
 strictly prohibited.*




 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


 *NOTICE: Protect the information in this message in accordance with the
 company's security policies. If you received this message in error,
 immediately notify the sender and destroy all copies.*






-- 
Regards, Evgeniy.

Tel.: +79060665574
ICQ: 380264507
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Problem with Openvswitch on OpenNebula3.8

2013-10-03 Thread Valentin Bud
Hey there,

I have the following setup working really well.

OS
--

$ uname -a 
Linux godzilla 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64
GNU/Linux


KVM LIBVIRT
--

$ dpkg -l | egrep 'libvirt|kvm'
ii  libsys-virt-perl  0.9.12-2
amd64Perl module providing an extension for the libvirt library
ii  libvirt-bin   0.9.12-11+deb7u1
amd64programs for the libvirt library
ii  libvirt0  0.9.12-11+deb7u1
amd64library for interfacing with different virtualization
systems
ii  qemu-kvm  1.1.2+dfsg-6
amd64Full virtualization on x86 hardware


OPENVSWITCH
--

$ dpkg -l | grep openvswitch
ii  openvswitch-common1.11.0-1
amd64Open vSwitch common components
ii  openvswitch-datapath-dkms 1.11.0-1
all  Open vSwitch datapath module source - DKMS version
ii  openvswitch-switch1.11.0-1
amd64Open vSwitch switch implementations


OPENNEBULA
--
 
$ dpkg -l | grep opennebula
ii  opennebula4.2.0-1
amd64controller which executes the OpenNebula cluster services
ii  opennebula-common 4.2.0-1
all  empty package to create OpenNebula users and directories
ii  opennebula-node   4.2.0-1
all  empty package to prepare a machine as OpenNebula Node
ii  opennebula-sunstone   4.2.0-1
all  web interface to which executes the OpenNebula cluster
services
ii  opennebula-tools  4.2.0-1
all  Command-line tools for OpenNebula Cloud
ii  ruby-opennebula   4.2.0-1
all  Ruby bindings for OpenNebula Cloud API (OCA)


On Thu, Oct 03, 2013 at 09:53:44AM +0800, 木易@'4武  wrote:
 Thank you very much,
 It works, I forgot to define the network of the host.
 One last question: can Open vSwitch 1.10 or higher be used with OpenNebula
 3.8 and 4.2?
 I found that the brcompat module no longer exists.

The 3.8 Open vSwitch documentation [1] does mention that the bridge
compatibility layer must be installed.

The 4.2 Open vSwitch documentation [2] says that KVM doesn't need the Linux
bridge compatibility layer. XEN, however, still needs it, and a different
network manager driver, ovswitch_brcompat, is provided for that scenario.

Prior to the above setup I ran Debian Squeeze with libvirt and qemu from
backports and Open vSwitch 1.10 with the OpenNebula 3.8 branch. It worked
without the bridge compatibility layer.

I suggest you use the latest version from OpenNebula. I have noticed
that Open vSwitch releases are stable and easy to package for DEB/RPM
systems so I try to keep up with the latest release. 


[1]: http://opennebula.org/documentation:archives:rel3.8:openvswitch
[2]: http://opennebula.org/documentation:rel4.2:openvswitch

Good Will,
Valentin

 
 
 -- Original Message --
 From: Valentin Bud <valentin@gmail.com>
 Sent: Thursday, October 3, 2013, 0:05 AM
 To: 木易@'4武 <yangz...@qq.com>
 Cc: users <users@lists.opennebula.org>
 Subject: Re: Reply: [one-users] Problem with Openvswitch on OpenNebula3.8
 
 
 
 Hi there,
 
 You should create the second host on the frontend with something like the 
 following:
 
 
 # onehost create second_host -i kvm -v kvm -n ovswitch
 
  
 Have you done that already?
 
 
 Then on the second machine you must have an OVS bridge created with the name 
 as specified in
 the vnet you have inside OpenNebula.
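 For reference, creating that bridge on the node might look like the
 following sketch (the bridge name ovsbr0 and the physical interface eth0
 are assumptions; use the BRIDGE name from your vnet):

```shell
# Create the OVS bridge the vnet refers to and attach the physical NIC.
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eth0
# Verify the bridge and its ports:
ovs-vsctl show
```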
  
  Good Will,
 
 On Wed, Oct 2, 2013 at 6:39 PM, 木易@'4武 yangz...@qq.com wrote:
  Thanks for your reply,
 1. Should I configure the driver on the second host? The network driver is
 only set on the frontend.
  I have configured the environment listed below on the second host:
 a ruby environment;
 a kvm environment;
 the oneadmin user with passwordless sudo;
 passwordless access between the frontend and the host;
  ovs 1.9.0 with brcompat;
 2. Yes, I can create the OVS bridge on the second machine using
 passwordless ssh.
 Normally the network driver is defined on the frontend, and the frontend
 executes the ovs commands through ssh,
  but when I build a VM on the second host, it does nothing about it.
 -- Original Message --
  From: Valentin Bud <valentin@gmail.com>
 Sent: Wednesday, October 2, 2013, 10:57 PM
  To: 木易@'4武 <yangz...@qq.com>
 Cc: users <users@lists.opennebula.org>
  Subject: Re: [one-users] Problem with Openvswitch on OpenNebula3.8
 
 
 
 Hello,
 Is your second host configured to use the ovswitch network driver? Is the
 ovs bridge created on the second machine with the name used in the vnet
 you've defined? Is your sudo configured to allow oneadmin to issue ovs-*
 commands without a password?
  
 On Wed, Oct 2, 2013 at 5:38 PM, 木易@'4武 yangz...@qq.com wrote:
  Hi, I'm trying to configure an OVS network in OpenNebula 3.8 and have
 succeeded on the frontend host (frontend and host together). But when I
 build a VM on another host, it can't add an ovs bridge.
  Normally, it

Re: [one-users] OpenNebula and DHCP Server

2013-10-03 Thread Valentin Bud
Hello Fazli,

I will make some assumptions about your infrastructure and provide 
possible approach(es).

* Your KVM nodes have a single Ethernet interface, eth0, connected in a
  switch and a router used as the default gateway for the 192.168.1/24
  network,

* Also the frontend is connected via the same switch with the rest of
  the nodes,

* You have a br0 bridge with eth0 connected to it on each node and also
  the frontend,

* Your frontend is also a node.

If you have access to the router the simplest way would be to add an IP
Address alias on the router interface as the default gateway for the new
network. 

Configure a new network inside OpenNebula for that using the chosen
subnet and the same bridge, br0.

I don't know if you have any kind of security policies in place but be
careful that in this way there is no Layer 2 separation and traffic
between the two subnets is visible with tcpdump or other sniffers.

The second approach I can think of is to have the frontend configured
with the first IP address from the new subnet on br0, and to define a new
network inside OpenNebula like the above.

I don't know if this would work though. NAT must be done for 10.100.0/24
over 192.168.1.X (the IP address of the frontend on the 192.168.1/24
subnet). What I don't know is whether iptables can MASQUERADE between
subnets on the same interface. I've never tried it, but it might work.
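If you do try it, the masquerade rule itself is a one-liner. A sketch,
assuming the frontend routes for the private subnet over br0:

```shell
# Enable forwarding and NAT the private subnet out through br0.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.100.0.0/24 ! -d 10.100.0.0/24 \
         -o br0 -j MASQUERADE
```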

Another approach that comes to mind is to use the Virtual Router and
define a new subnet on the same br0 bridge. The Virtual Router would
have an interface connected to the 192.168.1/24 network and one in the
10.100.0/24 one. Set it up to have the first IP address from the
10.100.0/24 network so it is the default gateway.

The same caveat applies: traffic over L2 is not separated in any way.

One more idea :-) would be to use Open vSwitch and GRE tunnels between
the nodes. In this way you can use VLANs and transport over GRE between
nodes. You can also setup IPSec encrypted GRE tunnels if you want
security. It might be overkill but again it depends on your
requirements.

Another working setup I have done is to use tinc VPN [1] between nodes
in switch mode and connect it to the Open vSwitch from each host as a
port. This way traffic that travels between nodes is fully encrypted and
you can use the same L2 network in a secure fashion.
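That tinc-plus-Open vSwitch setup boils down to one extra port. A sketch,
assuming tinc runs in switch mode and exposes a tap interface named tnc0:

```shell
# Plug the tinc tap interface into the OVS bridge so inter-node L2
# traffic travels through the encrypted tunnel.
ovs-vsctl add-port ovsbr0 tnc0
```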

But maybe the best approach would be to have a second network card,
eth1, in each node. Connect that second card in an Open vSwitch and use
VLANs with the frontend being the router, or any other node for that
matter.

[1]: http://www.tinc-vpn.org/

Good Will,
Valentin

On Thu, Oct 03, 2013 at 09:18:41AM +0800, M Fazli A Jalaluddin wrote:
 Hello Valentin,
 
 My setup for OpenNebula is 1 Front-end and several KVM nodes. The front-end
 and nodes are using IP address 192.168.1.xxx and are able to connect to the
 internet.
 
 The current networking setup for the VM is using dummy and bridge, br0.
 
 So, for the VMs to be able to access the internet, I assign them
 192.168.1.xxx IP addresses.
 
 If I have many VMs, IP address 192.168.1.xxx will be depleted.
 
 Hence, I need to make a new private network, such as 10.0.1.xxx, which
 will map to only a single 192.168.1.xxx address, e.g. 192.168.1.5.
 
 Thank you.
 
 Regards,
 Fazli
 
 
 On Wed, Oct 2, 2013 at 7:21 PM, Valentin Bud valentin@gmail.com wrote:
 
  Hello Fazli,
 
  The Virtual Router documentation [1] is definitely a good place to start.
 
 
  On Wed, Oct 2, 2013 at 1:57 PM, M Fazli A Jalaluddin 
  fazli.jalalud...@gmail.com wrote:
 
  Hi,
 
  Is there any tutorial on how to use the VirtualRouter?
 
  I have download the image from Marketplace and Deploy a VM out of it.
 
  Then what should I do?
 
   My concern is whether multiple VMs will be able to be assigned private
   IP addresses (and at the same time connect to the internet) while the
   KVM host is using a public IP address.
 
 
  I don't really understand your concern. Could you be more specific?
 
   Yes, every VM will get a private IP address from the Router in case you
   connect it to the private network. If you connect the VM to the public
   network too, you'd have to set up the IP address on the VM yourself.
   If the context package is installed in the VM, it will autoconfigure
   the public IP as well.
 
  [1]: http://opennebula.org/documentation:rel4.2:router
 
  Good Will,
 
 
 
  Thank you
 
  On Wed, Oct 2, 2013 at 4:26 PM, Carlos Martín Sánchez 
  cmar...@opennebula.org wrote:
 
  Hi,
 
  On Wed, Oct 2, 2013 at 6:56 AM, M Fazli A Jalaluddin 
  fazli.jalalud...@gmail.com wrote:
 
  Hi,
 
  May I know if the Virtual Router provide NAT?
 
 
  Yes, look for the Full Router section in the documentation:
  http://opennebula.org/documentation:rel4.2:router
 
  PS: Please reply also to the mailing list
 
  Regards.
  --
  Carlos Martín, MSc
  Project Engineer
  OpenNebula - Flexible Enterprise Cloud Made Simple
   www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
   (http://twitter.com/opennebula)
 
 
  On Wed, Oct 2, 2013 at 

Re: [one-users] DNS search domains and rc.local not being read in VMs

2013-10-03 Thread Valentin Bud
Hello Evgeniy,


On Thu, Oct 3, 2013 at 9:15 AM, Evgeniy Suvorov eesuvo...@gmail.com wrote:

 Hello,
 `And define the $searchdomain variable in your network template`
 Where i can find network template?


If you've already created your vnets inside OpenNebula you can update the
variables using

$ onevnet update vnet_id

Define SEARCHDOMAIN=domain.tld and save it.

Use SEARCHDOMAIN inside CONTEXT like:

SEARCHDOMAIN = "$NIC[SEARCHDOMAIN, NETWORK=\"private\"]"

with "private" being the name of the network your VM template uses.
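Put together, the two pieces might look like this sketch (the network name
"private" and the domain "domain.tld" are placeholders):

```
# Virtual network template (edited with: onevnet update <vnet_id>)
SEARCHDOMAIN = "domain.tld"

# CONTEXT section of the VM template
CONTEXT = [
  NETWORK      = "YES",
  SEARCHDOMAIN = "$NIC[SEARCHDOMAIN, NETWORK=\"private\"]"
]
```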

Afterwards modify or add new scripts to the contextualization packages.
Enjoy your work.

Good Will,








-- 
Valentin Bud
http://databus.pro | valen...@databus.pro
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] [one-user] migrate fail in some host

2013-10-03 Thread ????@'4??
Hi,
here is the log of a failed VM migration from host2 to host1; it cannot
trigger the mv command to move the VM image. So when I migrate, it deletes
the source file and does nothing to move the image to the destination,
telling me "No such file or directory":

Thu Oct  3 15:40:21 2013 [LCM][I]: New VM state is PROLOG_MIGRATE
Thu Oct  3 15:40:21 2013 [TM][I]: ExitCode: 0
Thu Oct  3 15:40:23 2013 [TM][I]: mv: Moving host2:/var/lib/one/datastores/0/80 to host1:/var/lib/one/datastores/0/80
Thu Oct  3 15:40:23 2013 [TM][I]: ExitCode: 0
Thu Oct  3 15:40:23 2013 [LCM][I]: New VM state is BOOT
Thu Oct  3 15:40:24 2013 [VMM][I]: ExitCode: 0
Thu Oct  3 15:40:24 2013 [VMM][I]: Successfully execute network driver operation: pre.
Thu Oct  3 15:40:25 2013 [VMM][I]: Command execution fail: /var/tmp/one/vmm/kvm/restore /var/lib/one//datastores/0/80/checkpoint host1 80 host1
Thu Oct  3 15:40:25 2013 [VMM][E]: restore: Command "virsh --connect qemu:///system restore /var/lib/one//datastores/0/80/checkpoint" failed: error: Failed to restore domain from /var/lib/one//datastores/0/80/checkpoint
Thu Oct  3 15:40:25 2013 [VMM][I]: error: Unable to allow access for disk path /var/lib/one//datastores/0/80/disk.0: No such file or directory
Thu Oct  3 15:40:25 2013 [VMM][E]: Could not restore from /var/lib/one//datastores/0/80/checkpoint
Thu Oct  3 15:40:25 2013 [VMM][I]: ExitCode: 1

But migrating from host1 to host2 is all OK:

Thu Oct  3 15:35:44 2013 [LCM][I]: New VM state is PROLOG_MIGRATE
Thu Oct  3 15:35:44 2013 [TM][I]: ExitCode: 0
Thu Oct  3 15:35:47 2013 [TM][I]: mv: Moving host1:/var/lib/one/datastores/0/81 to host2:/var/lib/one/datastores/0/81
Thu Oct  3 15:35:47 2013 [TM][I]: ExitCode: 0
Thu Oct  3 15:35:47 2013 [LCM][I]: New VM state is BOOT
Thu Oct  3 15:35:47 2013 [VMM][I]: ExitCode: 0
Thu Oct  3 15:35:47 2013 [VMM][I]: Successfully execute network driver operation: pre.
Thu Oct  3 15:35:49 2013 [VMM][I]: ExitCode: 0
Thu Oct  3 15:35:49 2013 [VMM][I]: Successfully execute virtualization driver operation: restore.
Thu Oct  3 15:35:49 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-vsctl set Port vnet3 tag=302.
Thu Oct  3 15:35:49 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=66,dl_src=02:00:c0:a8:b6:80,priority=4,actions=normal.
Thu Oct  3 15:35:49 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=66,priority=39000,actions=drop.
Thu Oct  3 15:35:49 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=67,dl_src=02:00:63:09:09:7c,priority=4,actions=normal.
Thu Oct  3 15:35:49 2013 [VMM][I]: post: Executed sudo /usr/bin/ovs-ofctl add-flow ovsbr0 in_port=67,priority=39000,actions=drop.
Thu Oct  3 15:35:49 2013 [VMM][I]: ExitCode: 0
Thu Oct  3 15:35:49 2013 [VMM][I]: Successfully execute network driver operation: post.
Thu Oct  3 15:35:49 2013 [LCM][I]: New VM state is RUNNING

I am using the default system datastore and ssh transfer mode. Can you
tell me where the problem is?
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Re: [one-user] migrate fail in some host

2013-10-03 Thread ????@'4??
I found the problem.
Keep qemu.conf with dynamic_ownership = 0. To be able to use the images
copied by OpenNebula, also change the user and group under which libvirtd
runs to "oneadmin":

$ grep -vE '^($|#)' /etc/libvirt/qemu.conf
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0

Thanks to Ruben S. Montero's mail.
Check your KVM configuration to match
http://opennebula.org/documentation:rel3.8:kvmg#configuration



-- Original Message --
From: 木易@'4武 <yangz...@qq.com>
Sent: October 3, 2013 (Thursday) 4:11
To: users <users@lists.opennebula.org>

Subject: [one-user] migrate fail in some host



Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula and DHCP Server

2013-10-03 Thread M Fazli A Jalaluddin
Hi Valentin,

Your assumption is correct.

My method is to use the OpenNebula Virtual Router, following this page [1],
together with Open vSwitch.

I have installed Open vSwitch on the host and was able to deploy a VM in an
isolated network.

I tried to deploy the VirtualRouter in a virtual network.

My problem is that I cannot ping it and cannot SSH into it.

From the documentation, I understand that the VirtualRouter needs to be
deployed as a VM in a specific virtual network, where it will act as the
DHCP server for the VMs in the same virtual network.
I have also included the example context in the VirtualRouter template.

My VirtualRouter template:

NIC=[NETWORK_ID=0]
NIC=[NETWORK_ID=9,IP=10.0.10.1]
INPUT=[BUS=usb,TYPE=tablet]
MEMORY=512
OS=[ARCH=x86_64,BOOT=hd]
GRAPHICS=[LISTEN=0.0.0.0,TYPE=SPICE]
DISK=[IMAGE_ID=24]
CPU=0.5
CONTEXT=[TARGET=hdb,NETWORK=YES,FORWARDING="8080:10.0.10.2:80
10.0.10.2:22",DHCP=YES,PRIVNET=$NETWORK[TEMPLATE, NETWORK=\"ovs.10\"],
TEMPLATE=TEMPLATE,SSH_PUBLIC_KEY="ssh-rsa
B3NzaC1yc2EDAQABAAABAQCk+MN96iAn4uXRieJqyJG7WY32zW0LTXJBdISdjDLlp8QgFrxOdi9Aw2+eu+QSbVHwBsqOTimpOuzknisOhD4RPCTCT7G2/xaEUxWg0AB3ySrMZC3Dv5AgBy0CikFk50/CbwBtMjj2pRINm0axfP+cUT/VBhJRAiwVe2wsIOL/t2PGOy0O8Q2zjG1XfCVZPCYPOxj9Jk0y8DoMHp0ILA6gM7hGN4CKAQiXnbjv8WD9uFpRr7eruXQUdMuPn2wnyDMcCnzUEMtPUoPIy6gyAer3biRyEQkAXNJ+R1WXvX6Ah848MTyoICoA7KKIm9e3xe/SXMJxxOPHZLWSJSIRmhcd
hpc1@hpc-workstation1",PUBNET=$NETWORK[TEMPLATE, NETWORK=\"Virtual
Network.113\"],DNS="8.8.8.8 8.8.4.4"]

May I know how to actually use the VirtualRouter?

[1] http://opennebula.org/documentation:rel4.2:router




Re: [one-users] ssh password less login not function

2013-10-03 Thread Amier Anis
Hi team,

Once opennebula-common creates the oneadmin user, is there any issue if I
reset the oneadmin password?

Is passwordless login required from the worker nodes to the management node?



On Wed, Oct 2, 2013 at 5:02 PM, Amier Anis myma...@gmail.com wrote:

 I don't think that selinux is the issue, as I can ssh passwordless
 without any problem when OpenNebula is not installed.
 I have also tried setenforce 0 and still have the same issue (I tried a
 different machine).

 [oneadmin@mnode lib]$ /usr/sbin/sestatus
 SELinux status: disabled


 I have tried both letting opennebula-common create the user and creating
 it manually; same issue.
 This is how I install OpenNebula and its components:

 yum -y install opennebula-server opennebula-sunstone opennebula-ozones
 opennebula-gate opennebula-flow opennebula-node-kvm


 Yes, i have all the file in the ~/.ssh

 [oneadmin@mnode .ssh]$ ls -l
 total 16
 -rw--- 1 oneadmin oneadmin  406 Oct  2 10:19 authorized_keys
 -rw--- 1 oneadmin oneadmin   61 Oct  2 03:08 config
 -rw--- 1 oneadmin oneadmin 1675 Oct  2 10:19 id_rsa
 -rw--- 1 oneadmin oneadmin  406 Oct  2 10:19 id_rsa.pub

 I tried ssh -v to node01 ... this error came out; however, the error did
 not appear at first.

 -bash-4.1$ ssh -v 10.86.3.101

 OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010

 debug1: Reading configuration data /var/lib/one/.ssh/config

 debug1: Reading configuration data /etc/ssh/ssh_config

 debug1: Applying options for *

 debug1: Connecting to 10.86.3.101 [10.86.3.101] port 22.

 debug1: Connection established.

 debug1: identity file /var/lib/one/.ssh/identity type -1

 debug1: identity file /var/lib/one/.ssh/id_rsa type 1

 debug1: identity file /var/lib/one/.ssh/id_dsa type -1

 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3

 debug1: match: OpenSSH_5.3 pat OpenSSH*

 debug1: Enabling compatibility mode for protocol 2.0

 debug1: Local version string SSH-2.0-OpenSSH_5.3

 debug1: SSH2_MSG_KEXINIT sent

 debug1: SSH2_MSG_KEXINIT received

 debug1: kex: server-client aes128-ctr hmac-md5 none

 debug1: kex: client-server aes128-ctr hmac-md5 none

 debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent

 debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP

 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent

 debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY

 debug1: Host '10.86.3.101' is known and matches the RSA host key.

 debug1: Found key in /var/lib/one/.ssh/known_hosts:1

 debug1: ssh_rsa_verify: signature correct

 debug1: SSH2_MSG_NEWKEYS sent

 debug1: expecting SSH2_MSG_NEWKEYS

 debug1: SSH2_MSG_NEWKEYS received

 debug1: SSH2_MSG_SERVICE_REQUEST sent

 debug1: SSH2_MSG_SERVICE_ACCEPT received

 debug1: Authentications that can continue:
 publickey,gssapi-keyex,gssapi-with-mic,password

 debug1: Next authentication method: gssapi-keyex

 debug1: No valid Key exchange context

 debug1: Next authentication method: gssapi-with-mic

 debug1: Unspecified GSS failure.  Minor code may provide more information

 Bad format in credentials cache

 debug1: Unspecified GSS failure.  Minor code may provide more information

 Bad format in credentials cache

 debug1: Unspecified GSS failure.  Minor code may provide more information

 debug1: Unspecified GSS failure.  Minor code may provide more information

 Bad format in credentials cache

 debug1: Next authentication method: publickey

 debug1: Trying private key: /var/lib/one/.ssh/identity

 debug1: Offering public key: /var/lib/one/.ssh/id_rsa

 debug1: Authentications that can continue:
 publickey,gssapi-keyex,gssapi-with-mic,password

 debug1: Trying private key: /var/lib/one/.ssh/id_dsa

 debug1: Next authentication method: password​


 Which is better: exporting /var/lib/one to every worker node, or copying
 it manually to each worker?

 Thank you.

 Regards & Best Wishes,


 *.: Amier Anis :.*
 Mobile: +6012-260-0819
 On Wed, Oct 2, 2013 at 3:40 PM, Valentin Bud valentin@gmail.com wrote:

 Hello Amier,


 On Wed, Oct 2, 2013 at 10:27 AM, Amier Anis myma...@gmail.com wrote:

 Hi valentin,

 Yes, I'm using packages from the OpenNebula repo and there were no errors
 during install, whether I created the oneadmin user first or it was
 created automatically by the installer.

 yum -y install opennebula-server opennebula-sunstone opennebula-ozones
 opennebula-gate opennebula-flow opennebula-node-kvm


 The opennebula-common package provides the user oneadmin so no need to
 create it manually. The opennebula-common is required by
 opennebula-server so no need to install it manually.



 I also removed SELinux from the system:

 yum -y remove selinux-policy


 Have you rebooted your system afterwards?



 Yes, I have already configured ~/.ssh/config:

 [oneadmin@mnode]$ vi ~/.ssh/config
 Host *
     StrictHostKeyChecking no
     UserKnownHostsFile /dev/null
     ControlMaster auto
     ControlPath /tmp/%r@%h:%p


 This looks OK.

 I suggest you remove the packages yum -y remove opennebula-\* and remove
 the oneadmin user, rm -rf 

[one-users] Network addressing and IP recognition in ONE

2013-10-03 Thread Andreas Calvo Gómez

Hello all,
Network addressing under ONE (with any of the drivers) seems targeted at 
isolated/dedicated networks, where one can assume that every IP address can 
be derived from the MAC.
However, when mixing a working network segment with ONE, you have to 
sacrifice something to get it working.

Currently, there are 3 options:
- Virtual Router
- Virtual Network with fixed range
- Virtual Network without range

None of these options works with an external network-services server 
(DHCP, DNS) while still letting ONE know the right address of a VM.


Would it be possible, given a VM and all its context parameters, to push the 
IP information into ONE?
In a more practical approach: during OS startup, obtain a valid IP address 
and, in the context script, push that information to ONE.
This seems a bit more dynamic, since nothing is hardcoded in either the 
Virtual Networks or the network service (DHCP); plus it gives the ability to 
control things from outside and to integrate with other services.
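One way to implement this today (my assumption, not an existing ONE feature): OpenNebula ships a OneGate service (the opennebula-gate package) that lets a VM push attributes about itself back to the frontend. A boot-time context script along these lines could report the leased IP; the GUEST_IP attribute name and the token path are made up for illustration:

```shell
#!/bin/sh
# Sketch of a boot-time context script that discovers the guest's IP and
# pushes it back to OpenNebula through OneGate. The attribute name GUEST_IP
# and the token location are assumptions, not part of any ONE contract.

# Extract the first non-loopback IPv4 address from `ip addr` output.
get_first_ip() {
    awk '/inet / && $2 !~ /^127\./ { split($2, a, "/"); print a[1]; exit }'
}

IP=$(ip addr show 2>/dev/null | get_first_ip)

# ONEGATE_ENDPOINT and the token come from the context CD-ROM when the VM
# template enables OneGate; guard so the script is a no-op otherwise.
if [ -n "$ONEGATE_ENDPOINT" ] && [ -r /mnt/token.txt ]; then
    curl -s -X PUT "$ONEGATE_ENDPOINT/vm" \
        -H "X-ONEGATE-TOKEN: $(cat /mnt/token.txt)" \
        -d "GUEST_IP=$IP"
fi
```

The attribute would then show up in the VM's USER_TEMPLATE, where external tooling can read it.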


Thanks!!
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula and DHCP Server

2013-10-03 Thread Valentin Bud
Hi Fazli,


On Thu, Oct 3, 2013 at 12:22 PM, M Fazli A Jalaluddin 
fazli.jalalud...@gmail.com wrote:

 Hi Valentin,

 Your assumption is correct.

 My method is to use the OpenNebula Virtual Router, referring to this page
 [1], and Open vSwitch.

 I have installed Open vSwitch on the host and I was able to deploy a VM in
 an isolated network.

 I tried to deploy the VirtualRouter in a virtual network.


In two virtual networks, in fact: PUBNET, which should be the 192.168
network from br0 on the nodes and the frontend, and PRIVNET, in the Open
vSwitch network.



 My problem is, I cannot ping it and cannot SSH into it.


You should be able to connect to PUBNET's virtual IP Address from within
the 192.168 network.

Or you could add an internal port to Open vSwitch bridge and try to connect
to PRIVNET's virtual
IP Address of the VR.



 From the documentation, I understand that the VirtualRouter needs to be
 deployed as a VM in a specific virtual network, and it will act as the DHCP
 server for the VMs in the same virtual network.
 I also have included the example context in the VirtualRouter template.

 My VirtualRouter template:

 NIC=[NETWORK_ID=0]
 NIC=[NETWORK_ID=9,IP=10.0.10.1]
 INPUT=[BUS=usb,TYPE=tablet]
 MEMORY=512
 OS=[ARCH=x86_64,BOOT=hd]
 GRAPHICS=[LISTEN=0.0.0.0,TYPE=SPICE]
 DISK=[IMAGE_ID=24]
 CPU=0.5
 CONTEXT=[TARGET=hdb,NETWORK=YES,FORWARDING=8080:10.0.10.2:80
 10.0.10.2:22,DHCP=YES,PRIVNET=$NETWORK[TEMPLATE, NETWORK=\"ovs
 .10\"],TEMPLATE=TEMPLATE,SSH_PUBLIC_KEY=ssh-rsa
 B3NzaC1yc2EDAQABAAABAQCk+MN96iAn4uXRieJqyJG7WY32zW0LTXJBdISdjDLlp8QgFrxOdi9Aw2+eu+QSbVHwBsqOTimpOuzknisOhD4RPCTCT7G2/xaEUxWg0AB3ySrMZC3Dv5AgBy0CikFk50/CbwBtMjj2pRINm0axfP+cUT/VBhJRAiwVe2wsIOL/t2PGOy0O8Q2zjG1XfCVZPCYPOxj9Jk0y8DoMHp0ILA6gM7hGN4CKAQiXnbjv8WD9uFpRr7eruXQUdMuPn2wnyDMcCnzUEMtPUoPIy6gyAer3biRyEQkAXNJ+R1WXvX6Ah848MTyoICoA7KKIm9e3xe/SXMJxxOPHZLWSJSIRmhcd
 hpc1@hpc-workstation1,PUBNET=$NETWORK[TEMPLATE, NETWORK=\"Virtual
 Network .113\"],DNS=8.8.8.8 8.8.4.4]


This looks good and should work.



 May I know how to actually use the VirtualRouter?


 [1] http://opennebula.org/documentation:rel4.2:router



Good Will,



 On Thu, Oct 3, 2013 at 3:56 PM, Valentin Bud valentin@gmail.com wrote:

 Hello Fazli,

 I will make some assumptions about your infrastructure and provide
 possible approach(es).

 * Your KVM nodes have a single Ethernet interface, eth0, connected in a
   switch and a router used as the default gateway for the 192.168.1/24
   network,

 * Also the frontend is connected via the same switch with the rest of
   the nodes,

 * You have a br0 bridge with eth0 connected to it on each node and also
   the frontend,

 * Your frontend is also a node.

 If you have access to the router the simplest way would be to add an IP
 Address alias on the router interface as the default gateway for the new
 network.

 Configure a new network inside OpenNebula for that using the chosen
 subnet and the same bridge, br0.

 I don't know if you have any kind of security policies in place but be
 careful that in this way there is no Layer 2 separation and traffic
 between the two subnets is visible with tcpdump or other sniffers.
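
 For this first approach, the new network registered in OpenNebula might
 look like the following ranged-network template (names and addresses are
 placeholders; the syntax follows the ONE 4.x template format):

```
NAME            = "public-extra"    # assumed name
TYPE            = RANGED
BRIDGE          = br0
NETWORK_ADDRESS = 10.100.0.0        # assumed new subnet
NETWORK_SIZE    = 254
GATEWAY         = 10.100.0.1        # the alias added on the router
DNS             = "10.100.0.1"      # assumed DNS server
```

 Saved as public-extra.net, it would be registered with
 onevnet create public-extra.net.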

 The second approach I can think of is to have the frontend configured
 with the first IP Address from the new subnet on br0 and define a new
 network inside OpenNebula like the one above.

 I don't know if this would work though. The NAT must be done for
 10.100.0/24 over
 192.168.1.X (the IP Address of the frontend from the 192.168.1/24 subnet).
 What I don't know is if iptables can MASQUERADE subnets on the same
 interface. Never tried it; it might work.
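
 An untested sketch of what that NAT setup might look like (the subnet and
 bridge names come from the thread; everything else is an assumption). It
 only PRINTS the commands so they can be reviewed before running as root:

```shell
#!/bin/sh
# Untested sketch of the NAT setup discussed above, for the frontend acting
# as gateway for the new subnet.

SUBNET="10.100.0.0/24"   # the new VM subnet
OUT_IF="br0"             # bridge carrying both subnets

nat_rules() {
    # let the kernel route between subnets at all
    echo "sysctl -w net.ipv4.ip_forward=1"
    # masquerade the new subnet as it leaves br0, excluding subnet-local traffic
    echo "iptables -t nat -A POSTROUTING -s $SUBNET ! -d $SUBNET -o $OUT_IF -j MASQUERADE"
}

nat_rules   # pipe to `sh` (as root) to actually apply
```

 Whether MASQUERADE behaves with both subnets on the same interface is
 exactly the open question here; iptables itself does not forbid it.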

 Another approach that comes to mind is to use the Virtual Router and
 define a new subnet on the same br0 bridge. The Virtual Router would
 have an interface connected to the 192.168.1/24 network and one in the
 10.100.0/24 one. Set it up to have the first IP Address from the
 10.100.0/24 network so it is the default gateway.

 The same applies: traffic over L2 is not separated in any way.

 One more idea :-) would be to use Open vSwitch and GRE tunnels between
 the nodes. In this way you can use VLANs and transport them over GRE
 between nodes. You can also set up IPsec-encrypted GRE tunnels if you
 want security. It might be overkill, but again it depends on your
 requirements.

 Another working setup I have done is to use tinc VPN [1] between nodes
 in switch mode and connect it to the Open vSwitch from each host as a
 port. This way traffic that travels between nodes is fully encrypted and
 you can use the same L2 network in a secure fashion.

 But maybe the best approach would be to have a second network card,
 eth1, in each node. Connect that second card in an Open vSwitch and use
 VLANs with the frontend being the router, or any other node for that
 matter.

 [1]: http://www.tinc-vpn.org/

 Good Will,
 Valentin

 On Thu, Oct 03, 2013 at 09:18:41AM +0800, M Fazli A Jalaluddin wrote:
  Hello Valentin,
 
  My setup for OpenNebula is 1 Front-end and several KVM nodes. The
 front-end
 

Re: [one-users] OpenNebula and DHCP Server

2013-10-03 Thread Ionut Popovici

Did you guys activate ip_forward so that packets can be routed through the networks?
On 10/3/2013 2:44 PM, Valentin Bud wrote:


[one-users] Shutting down a VM from within the VM

2013-10-03 Thread Parag Mhashilkar
Hi,

Does the OpenNebula EC2 interface support shutting down a VM from within the VM 
itself and have the scheduler recognize that the VM has been stopped/shut down? 
How do we enable this feature? At Fermi, we have OpenNebula v3.2, and when the 
VM is shut down it stays in the UNKNOWN state. Can OpenNebula get this ACPI 
shutdown info from virsh and handle the situation more gracefully rather than 
putting the VM in the UNKNOWN state?

Here is an example why I think something like this is useful:

When VMs are launched to perform certain tasks (the classical equivalent of 
batch nodes), only the processes running in the VM know when the task is done 
and can shut down the VM, freeing up the resources. Running a VM past the 
task's lifetime wastes resources, and controlling the lifetime of the VM from 
outside is not always possible.

In the case of AWS, it supports the following, which is a very good feature to 
have when controlling VMs in the above scenario:
ec2-run-instances --instance-initiated-shutdown-behavior stop|terminate

How do we achieve this with OpenNebula?

Thanks & Regards
+==
| Parag Mhashilkar
| Fermi National Accelerator Laboratory, MS 120
| Wilson  Kirk Road, Batavia, IL - 60510
|--
| Phone: 1 (630) 840-6530 Fax: 1 (630) 840-2783
|--
| Wilson Hall, 806E (Nov 8, 2012 - To date)
| Wilson Hall, 867E (Nov 17, 2010 - Nov 7, 2012)
| Wilson Hall, 863E (Apr 24, 2007 - Nov 16, 2010)
| Wilson Hall, 856E (Mar 21, 2005 - Apr 23, 2007)
+==





[one-users] Cannot create images using Sunstone self-service Cloud View 4.2

2013-10-03 Thread Gerry O'Brien

Hi,

I have enabled image creation in the cloud.yaml view (see below). 
But while the green create button appears, if I press it all that 
happens is that I'm dumped onto the dashboard page and get the message 
"Error: Cannot connect to OpenNebula Marketplace".


Our systems are behind a proxy, but I have had to delete the 
http_proxy shell variable as otherwise Sunstone can't talk to the oned 
daemon if it goes through the proxy. I've also tried configuring 
sunstone-server.conf to go directly to oned at 127.0.0.1, but that doesn't 
work either.


Any ideas on this? A cloud user not being able to create an image 
is a great limitation to my mind. My hunch is that it is connected to 
the failure to connect to the Marketplace. I've also commented in and 
out the marketplace settings in sunstone-server.conf, but this has made 
no difference.
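
One standard Unix-level workaround (my assumption, not something tried in 
this thread): keep http_proxy set for outbound requests such as the 
Marketplace, but exempt local addresses via no_proxy so Sunstone can still 
reach oned directly. The proxy address below is a placeholder:

```shell
# Keep the proxy for outbound HTTP (Marketplace) but bypass it for the
# local oned XML-RPC endpoint. Proxy host/port are placeholders.
export http_proxy="http://proxy.example.com:3128"
export no_proxy="127.0.0.1,localhost"
# then restart Sunstone under this environment, e.g.
# sunstone-server stop && sunstone-server start
```

Most HTTP client libraries honour no_proxy, so this should let the two 
connections stop conflicting, assuming Sunstone's marketplace client does too.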


Regards,
Gerry



images-tab:
    panel_tabs:
        image_info_panel: true
    table_columns:
        - 0         # Checkbox
        - 1         # ID
        - 2         # Owner
        - 3         # Group
        - 4         # Name
        - 5         # Datastore
        - 6         # Size
        - 7         # Type
        - 8         # Registration time
        - 9         # Persistent
        - 10        # Status
        - 11        # #VMs
        #- 12       # Target
    actions:
        Image.refresh: true
        Image.create_dialog: true
        Image.chown: false
        Image.chgrp: false
        Image.chmod: false
        Image.enable: true
        Image.disable: true
        Image.persistent: true
        Image.nonpersistent: true
        Image.clone_dialog: true
        Image.delete: true












--
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341



Re: [one-users] ssh password less login not function

2013-10-03 Thread Amier Anis
hi team,

any idea on this?

Sent from my BlackBerry Z10.

From: Amier Anis
Sent: Thursday, 3 October 2013 18:35
To: Valentin Bud
Cc: users@lists.opennebula.org
Subject: Re: [one-users] ssh password less login not function

HI team,

once opennebula-common creates oneadmin, is there any issue if I reset the
oneadmin password?
Is password-less login required from the workers to the management node?

[one-users] Supported Version of ESXi server?

2013-10-03 Thread Dmitri Chebotarov
Hi

Which version of ESXi server is supported as the host OS?
Is ESXi 4.1 u3 supported? Or is it ESXi 5.x only?

I've added ESXi 4.1 u3 and oned cannot parse some of the output:

Thu Oct  3 15:20:02 2013 [ONE][E]: Error parsing host information: syntax
error, unexpected VARIABLE, expecting COMMA or CBRACKET at line 12,
columns 234:235. Monitoring information:
MODELNAME=Intel(R) Xeon(R) CPU   X5560  @ 2.80GHz
CPUSPEED=2800
TOTALCPU=800
USEDCPU=1.21
FREECPU=798.79
TOTALMEMORY=37736812
USEDMEMORY=3280896
FREEMEMORY=34455916
NETRX=1159
NETTX=0
VM_POLL=YES
VM = [ID=-1,DEPLOY_ID=ESXi 4.1 u3,POLL=STATE=d]
VM = [ID=-1,DEPLOY_ID=Ubuntu Server 12.04 LTS Dom0,POLL=STATE=a
USEDCPU=0.36 USEDMEMORY=1936384 NETRX=185 NETTX=0]




--
Thank you,

Dmitri Chebotarov
VCL Sys Eng, Engineering & Architectural Support, TSD - Ent Servers & 
Messaging
223 Aquia Building, Ffx, MSN: 1B5
Phone: (703) 993-6175 | Fax: (703) 993-3404



Re: [one-users] Shutting down a VM from within the VM

2013-10-03 Thread Sharuzzaman Ahmat Raslan
Hi Parag,

I believe OpenNebula needs human intervention to really determine whether
or not to remove a VM that it has deployed.

I also think that you could write a script that signals or calls an
OpenNebula command as soon as the task finishes, to shut down the VM. Or,
if calling a command directly is not possible, maybe your application can
write some status to a database, and a script on the OpenNebula side can
read that status and make a decision from it.
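
A minimal sketch of such a wrapper, assuming the VM can reach the frontend
over SSH and learns its own VM ID (here via a VMID context variable); the
FRONTEND name and the variable names are illustrative, not from the thread:

```shell
#!/bin/sh
# Sketch of the wrapper suggested above: run the task, then signal the
# frontend to shut this VM down. FRONTEND, the oneadmin account and the
# VMID context variable are assumptions about the local setup.

FRONTEND=""   # e.g. the frontend's hostname; left empty here on purpose

run_task() {
    # placeholder for the real batch job
    echo "task done"
}

run_task

# VMID would typically arrive via contextualization; only call out when
# both pieces of information are present.
if [ -n "$FRONTEND" ] && [ -n "$VMID" ]; then
    ssh "oneadmin@$FRONTEND" "onevm shutdown $VMID"
fi
```

Whether `onevm shutdown` or a plain guest poweroff is the right final step
depends on how the monitoring driver reports the powered-off state in your
ONE version.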

Thanks.


On Thu, Oct 3, 2013 at 11:38 PM, Parag Mhashilkar pa...@fnal.gov wrote:

 Hi,

 Does OpenNebula EC2 interface support shutting down a VM from with in the
 VM itself and have the scheduler recognize that VM has been
 stopped/shutdown? How do we enable this feature? At Fermi, we have
 OpenNebula v3.2 and when the VM is shutdown it stays in the UNKNOWN state.
 Can OpenNebula get this ACPI shutdown info from virsh and handle the
 situation more gracefully rather than putting the VM in UKNOWN state?

 Here is an example why I think something like this is useful:

 When VMs are launched to perform certain tasks (classical equivalent of
 batch nodes), only the processes running in the VM know when the task is
 done and can shutdown the VM freeing up the resources. Running VM past the
 task life is wasted resources and controlling the lifetime of VM from
 outside is not always possible.

 In case of AWS, it supports following which is very good feature to have
 when controlling the VMs in above scenario.
 ec2-run-instaces --instance-initiated-shutdown-behavior stop|terminate

 How do we achieve this with Opennebula?

 Thanks  Regards
 +==
 | Parag Mhashilkar
 | Fermi National Accelerator Laboratory, MS 120
 | Wilson  Kirk Road, Batavia, IL - 60510
 |--
 | Phone: 1 (630) 840-6530 Fax: 1 (630) 840-2783
 |--
 | Wilson Hall, 806E (Nov 8, 2012 - To date)
 | Wilson Hall, 867E (Nov 17, 2010 - Nov 7, 2012)
 | Wilson Hall, 863E (Apr 24, 2007 - Nov 16, 2010)
 | Wilson Hall, 856E (Mar 21, 2005 - Apr 23, 2007)
 +==


 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Sharuzzaman Ahmat Raslan


Re: [one-users] ssh password less login not function

2013-10-03 Thread Valentin Bud
Hi Amier,


On Thu, Oct 3, 2013 at 1:35 PM, Amier Anis myma...@gmail.com wrote:

 HI team,

  once opennebula-common creates oneadmin, is there any issue if I reset
  the oneadmin password?


The OS one, or the OpenNebula one via oneuser? No problem in either case;
just make sure to update ~/.one/one_auth if you change oneadmin's ONE
password.
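
A small sketch of that, assuming user ID 0 is oneadmin and using a
placeholder password; the oneuser call is guarded so the snippet is a
no-op on machines without the CLI installed:

```shell
#!/bin/sh
# Reset oneadmin's OpenNebula password and keep the CLI auth file in sync.
# NEWPASS is a placeholder; user ID 0 is oneadmin.

NEWPASS="s3cret"

# update the password inside OpenNebula (skip silently if the CLI is absent)
if command -v oneuser >/dev/null 2>&1; then
    oneuser passwd 0 "$NEWPASS"
fi

# the CLI reads its credentials from ~/.one/one_auth; it must match
mkdir -p "$HOME/.one"
echo "oneadmin:$NEWPASS" > "$HOME/.one/one_auth"
chmod 600 "$HOME/.one/one_auth"
```

If one_auth is left stale, every subsequent CLI call fails to authenticate,
which is the usual symptom of a password reset done halfway.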



 Is password-less login required from the workers to the management node?


If the management node is also a compute node and you want live migration to
work, yes, you have to provide that.


Good Will,





 On Wed, Oct 2, 2013 at 5:02 PM, Amier Anis myma...@gmail.com wrote:

 I don't think that SELinux is the issue, as I can ssh password-less
 without any problem when OpenNebula is not installed.
 I have also tried setenforce 0 and still have the same issue (I tried a
 different machine).

 [oneadmin@mnode lib]$ /usr/sbin/sestatus
 SELinux status: disabled


 I have tried both letting opennebula-common create the user and creating
 it manually. Same issue.
 This is how I install OpenNebula and the components:

 yum -y install opennebula-server opennebula-sunstone opennebula-ozones
 opennebula-gate opennebula-flow opennebula-node-kvm


 Yes, i have all the file in the ~/.ssh

 [oneadmin@mnode .ssh]$ ls -l
 total 16
 -rw--- 1 oneadmin oneadmin  406 Oct  2 10:19 authorized_keys
 -rw--- 1 oneadmin oneadmin   61 Oct  2 03:08 config
 -rw--- 1 oneadmin oneadmin 1675 Oct  2 10:19 id_rsa
 -rw--- 1 oneadmin oneadmin  406 Oct  2 10:19 id_rsa.pub

 I tried ssh -v to the node and the following error came out; it did not
 appear at first.

 -bash-4.1$ ssh -v 10.86.3.101

 OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010

 debug1: Reading configuration data /var/lib/one/.ssh/config

 debug1: Reading configuration data /etc/ssh/ssh_config

 debug1: Applying options for *

 debug1: Connecting to 10.86.3.101 [10.86.3.101] port 22.

 debug1: Connection established.

 debug1: identity file /var/lib/one/.ssh/identity type -1

 debug1: identity file /var/lib/one/.ssh/id_rsa type 1

 debug1: identity file /var/lib/one/.ssh/id_dsa type -1

 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3

 debug1: match: OpenSSH_5.3 pat OpenSSH*

 debug1: Enabling compatibility mode for protocol 2.0

 debug1: Local version string SSH-2.0-OpenSSH_5.3

 debug1: SSH2_MSG_KEXINIT sent

 debug1: SSH2_MSG_KEXINIT received

 debug1: kex: server->client aes128-ctr hmac-md5 none

 debug1: kex: client->server aes128-ctr hmac-md5 none

 debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent

 debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP

 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent

 debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY

 debug1: Host '10.86.3.101' is known and matches the RSA host key.

 debug1: Found key in /var/lib/one/.ssh/known_hosts:1

 debug1: ssh_rsa_verify: signature correct

 debug1: SSH2_MSG_NEWKEYS sent

 debug1: expecting SSH2_MSG_NEWKEYS

 debug1: SSH2_MSG_NEWKEYS received

 debug1: SSH2_MSG_SERVICE_REQUEST sent

 debug1: SSH2_MSG_SERVICE_ACCEPT received

 debug1: Authentications that can continue:
 publickey,gssapi-keyex,gssapi-with-mic,password

 debug1: Next authentication method: gssapi-keyex

 debug1: No valid Key exchange context

 debug1: Next authentication method: gssapi-with-mic

 debug1: Unspecified GSS failure.  Minor code may provide more information

 Bad format in credentials cache

 debug1: Unspecified GSS failure.  Minor code may provide more information

 Bad format in credentials cache

 debug1: Unspecified GSS failure.  Minor code may provide more information

 debug1: Unspecified GSS failure.  Minor code may provide more information

 Bad format in credentials cache

 debug1: Next authentication method: publickey

 debug1: Trying private key: /var/lib/one/.ssh/identity

 debug1: Offering public key: /var/lib/one/.ssh/id_rsa

 debug1: Authentications that can continue:
 publickey,gssapi-keyex,gssapi-with-mic,password

 debug1: Trying private key: /var/lib/one/.ssh/id_dsa

 debug1: Next authentication method: password


 Which is better: exporting /var/lib/one to every worker node, or copying
 it manually to each worker?

 Thank you.

 Regards & Best Wishes,


 *.: Amier Anis :.*
 Mobile: +6012-260-0819
 On Wed, Oct 2, 2013 at 3:40 PM, Valentin Bud valentin@gmail.com wrote:

 Hello Amier,


 On Wed, Oct 2, 2013 at 10:27 AM, Amier Anis myma...@gmail.com wrote:

 Hi valentin,

 Yes, I'm using packages from the OpenNebula repo and there were no errors
 during install, whether I created the oneadmin user first or it was
 created automatically by the installer.

 yum -y install opennebula-server opennebula-sunstone opennebula-ozones
 opennebula-gate opennebula-flow opennebula-node-kvm


 The opennebula-common package provides the user oneadmin so no need to
 create it manually. The opennebula-common is required by
 opennebula-server so no need to install it manually.



 I also removed SELinux from the system.

 yum -y remove selinux-policy


 Have you rebooted your system