Hi,

The approach I would take to solve this is to write my own script that reads the information from /mnt/context/context.sh and configures the second IP on eth0. You can send this script along with the init.sh in the FILES attribute.
The script would be as simple as:

    ifconfig eth0:1 $IP_PUBLIC netmask $NETMASK up

regards,
Jaime

On Thu, Oct 18, 2012 at 1:09 PM, Lim Kean Meng <[email protected]> wrote:
> I think I found why not just any IP binds to eth0: the IP must be defined
> in context.sh. In other words, the VM's network does not recognize any
> IP except the one defined in context.sh, see below:
>
> Q: since I'll need 2 IPs bound to the same interface eth0, how do I
> force the 2nd IP to be registered in context.sh?
>
> root@one-dev04:/srv/cloud/one/var/262# more context.sh
> # Context variables generated by OpenNebula
> BROADCAST="10.4.104.255"
> DNS="192.228.137.100"
> FILES="/var/lib/one/vm-templates/ONE-centos/centos-init.sh"
> GATEWAY="10.4.104.254"
> HOSTNAME="CentOS-6.2-x64"
> IP_PUBLIC="10.4.104.119"
> NETMASK="255.255.255.0"
> NETWORK="10.4.104.0"
> TARGET="hdb"
>
> Thanks and best regards.
>
> Lim
>
> *From:* Lim Kean Meng
> *Sent:* Wednesday, 17 October, 2012 4:50 PM
> *To:* '[email protected]'
> *Subject:* keepalived: problem implementing virtual ip
>
> I want to set up a load balancer pointing to 2 VMs in OpenNebula using
> keepalived (see www.keepalived.org).
> I am having a problem implementing the virtual IP (VIP).
> The VIP, which is actually an arbitrary IP in the same network, can bind
> to my VM's eth0 (this is the master server), and if the master goes down,
> the VIP floats over and binds to the standby VM's eth0.
>
> But the VIP cannot be pinged from any VM except the one binding it,
> as shown below.
> Here my VIP is 10.4.104.88, and my master and standby VMs are
> 10.4.104.28 and 10.4.104.91 respectively:
>
> [root@DW-LB01-253 ~]# ping 10.4.104.88
> PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
> 64 bytes from 10.4.104.88: icmp_seq=1 ttl=64 time=0.127 ms
> ^C
> --- 10.4.104.88 ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 760ms
> rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
>
> [root@DW-LB02-254 ~]# ping 10.4.104.88
> PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
> ^C
> --- 10.4.104.88 ping statistics ---
> 1 packets transmitted, 0 received, 100% packet loss, time 804ms
>
> [root@DW-LB01-253 ~]# ip add sh eth0
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>     link/ether 02:00:0a:04:68:1c brd ff:ff:ff:ff:ff:ff
>     inet 10.4.104.28/24 brd 10.4.104.255 scope global eth0
>     inet 10.4.104.88/32 scope global eth0
>     inet6 fe80::aff:fe04:681c/64 scope link
>        valid_lft forever preferred_lft forever
>
> [root@DW-LB02-254 ~]# ip add sh eth0
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>     link/ether 02:00:0a:04:68:5b brd ff:ff:ff:ff:ff:ff
>     inet 10.4.104.91/24 brd 10.4.104.255 scope global eth0
>     inet6 fe80::aff:fe04:685b/64 scope link
>        valid_lft forever preferred_lft forever
>
> Thanks and best regards.
>
> Lim
>
> _______________________________________________
> Users mailing list
> [email protected]
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | [email protected]
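[Editor's note] A contextualization helper along the lines Jaime suggests might look like the sketch below. The file name (vip-init.sh) and the VIP context variable are assumptions for illustration; VIP does not appear in Lim's context.sh and would have to be added to the VM template's CONTEXT section. Note that ifconfig takes the mask via the "netmask" keyword, since NETMASK in context.sh is a dotted-quad value, not a CIDR prefix.

```shell
#!/bin/sh
# vip-init.sh (hypothetical name) -- shipped via the FILES attribute
# alongside centos-init.sh, and run at boot by the init script.

CONTEXT=${CONTEXT:-/mnt/context/context.sh}

# Pull in the variables OpenNebula generated (IP_PUBLIC, NETMASK, ...
# plus the assumed extra VIP variable).
if [ -f "$CONTEXT" ]; then
    . "$CONTEXT"
fi

# Build the command that brings up the second address as alias eth0:1.
# ifconfig wants "netmask <dotted-quad>", not CIDR notation.
alias_up() {
    echo "ifconfig eth0:1 $1 netmask ${NETMASK:-255.255.255.0} up"
}

# On the VM the output would actually be executed, e.g.:
#   eval "$(alias_up "$VIP")"
# Here it is only printed, and only when VIP was defined in the context.
if [ -n "${VIP:-}" ]; then
    alias_up "$VIP"
fi
```

Keepalived will still add and remove the VIP itself on failover; the point of the script is only to make the guest (and its init flow) aware of the second address that the context file would otherwise omit.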
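[Editor's note] For readers following the keepalived side of the thread: the master's VRRP block for the setup Lim describes would typically look roughly like this. The virtual_router_id, priority, and advert_int values are illustrative, not taken from Lim's actual configuration; only the addresses come from the thread.

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby, DW-LB02-254 (10.4.104.91)
    interface eth0
    virtual_router_id 51    # illustrative; must match on both VMs
    priority 100            # lower value (e.g. 90) on the standby
    advert_int 1
    virtual_ipaddress {
        10.4.104.88         # the floating VIP
    }
}
```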
