Is anybody using the Virtual Router image?

I'm trying to create a VM from the OpenNebula Virtual Router image with 2 NICs to route traffic between networks. The template is below; the problem is that the VM's interfaces are not configured after it starts up, even though IPs are assigned.

CONTEXT=[NETWORK="YES",ROOT_PASSWORD="blahblah",SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]"]
CPU="1"
DHCP="YES"
DISK=[IMAGE="OpenNebula Virtual Router1",IMAGE_UNAME="oneadmin",READONLY="no"]
DNS="8.8.4.4 8.8.8.8"
FORWARDING="8080:172.16.50.253:80"
GATEWAY="172.16.50.253"
GRAPHICS=[LISTEN="0.0.0.0",TYPE="VNC"]
MEMORY="384"
NIC=[IP="172.16.100.253",NETWORK="Internal apps",NETWORK_UNAME="oneadmin"]
NIC=[IP="172.16.50.220",NETWORK="internal",NETWORK_UNAME="oneadmin"]
NTP_SERVER="172.16.50.253"
OS=[ARCH="x86_64",BOOT="hd"]
PRIVNET="$NETWORK[TEMPLATE, NETWORK=\"Internal apps\"]"
PUBNET="$NETWORK[TEMPLATE, NETWORK=\"internal\"]"
SEARCH="local.domain"
TARGET="hdb"
TEMPLATE="$TEMPLATE"
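
For reference, the context that actually reaches the VM can be checked from the front-end with the standard CLI (17 is the VM ID from the log below; the grep is just a rough filter on the plain-text output):

onevm show 17                       # full VM info, including the CONTEXT section as rendered
onevm show 17 | grep -A20 CONTEXT   # quick look at just the context attributes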

Log after the VM is instantiated:

Sun Jan 11 22:44:03 2015 [DiM][I]: New VM state is ACTIVE.
Sun Jan 11 22:44:03 2015 [LCM][I]: New VM state is PROLOG.
Sun Jan 11 22:44:04 2015 [LCM][I]: New VM state is BOOT
Sun Jan 11 22:44:04 2015 [VMM][I]: Generating deployment file: /var/lib/one/vms/17/deployment.0
Sun Jan 11 22:44:05 2015 [VMM][I]: ExitCode: 0
Sun Jan 11 22:44:05 2015 [VMM][I]: Successfully execute network driver operation: pre.
Sun Jan 11 22:44:05 2015 [VMM][I]: ExitCode: 0
Sun Jan 11 22:44:05 2015 [VMM][I]: Successfully execute virtualization driver operation: deploy.
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-vsctl set Port vnet0 tag=100".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=18,arp,dl_src=02:00:ac:10:64:fd,priority=45000,actions=drop".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=18,arp,dl_src=02:00:ac:10:64:fd,nw_src=172.16.100.253,priority=46000,actions=normal".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=18,dl_src=02:00:ac:10:64:fd,priority=40000,actions=normal".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=18,priority=39000,actions=drop".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-vsctl set Port vnet1 tag=16".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=19,arp,dl_src=02:00:ac:10:32:dc,priority=45000,actions=drop".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=19,arp,dl_src=02:00:ac:10:32:dc,nw_src=172.16.50.220,priority=46000,actions=normal".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=19,dl_src=02:00:ac:10:32:dc,priority=40000,actions=normal".
Sun Jan 11 22:44:06 2015 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow br0 in_port=19,priority=39000,actions=drop".
Sun Jan 11 22:44:06 2015 [VMM][I]: ExitCode: 0
Sun Jan 11 22:44:06 2015 [VMM][I]: Successfully execute network driver operation: post.
Sun Jan 11 22:44:06 2015 [LCM][I]: New VM state is RUNNING
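
The log only covers the Open vSwitch setup on the host side; if it helps, the installed port tags and flows can be double-checked with the usual OVS tools (bridge and port names taken from the log above):

sudo ovs-vsctl show             # vnet0/vnet1 should sit on br0 with tags 100 and 16
sudo ovs-ofctl show br0         # maps the OpenFlow port numbers (18, 19) to vnet0/vnet1
sudo ovs-ofctl dump-flows br0   # lists the add-flow rules installed above
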
Could somebody point me in the right direction to troubleshoot this?
All OpenNebula components reside on the same physical host.
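
For what it's worth, this is roughly how I plan to check the contextualization from inside the VM over VNC (just a sketch; the CONTEXT volume label and the device name are assumptions based on the usual KVM context CD-ROM):

blkid -t LABEL=CONTEXT                        # locate the context ISO (e.g. /dev/sr0 or /dev/hdb)
mount /dev/hdb /mnt && cat /mnt/context.sh    # device name is a guess; inspect the variables the router scripts received
ip addr show                                  # compare against the IPs assigned in the template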

Thanks,
--Roman
