Hi Jaime,
I did as you said. I specified the same bridge eth0 for both the public and
the private network. I created two VMs with both public and private
interfaces. I could ping private IPs from within the virtual machines.
However, I was surprised to see that if I create one VM with both public and
Hi Prakhar,
you need to use the same bridge for both the public and private networks: eth0.
regards,
Jaime
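
For reference, a minimal sketch of what this could look like, assuming OpenNebula's onevnet template format; the network names and address ranges below are hypothetical. Both templates point at the same eth0 bridge:

```
# public.net -- hypothetical public network template
NAME            = "Public"
TYPE            = RANGED
BRIDGE          = eth0
NETWORK_ADDRESS = 192.168.1.0
NETWORK_SIZE    = C

# private.net -- hypothetical private network template,
# using the SAME bridge as the public one
NAME            = "Private"
TYPE            = RANGED
BRIDGE          = eth0
NETWORK_ADDRESS = 10.0.0.0
NETWORK_SIZE    = C
```

Each template would then be registered with `onevnet create public.net` and `onevnet create private.net`.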
On Wed, Apr 6, 2011 at 9:18 PM, Prakhar Srivastava wrote:
> Hi,
> Thanks for the reply.
> I am using two NIC sections per VM (one with eth0 as the bridge and the
> other with eth1 as the bridge), but I have
Hi,
Thanks for the reply.
I am using two NIC sections per VM (one with eth0 as the bridge and the
other with eth1 as the bridge), but I have a single bridge created on each
of my cluster nodes, i.e. eth0. When I add the NIC for the private network,
which has "eth1" as its bridge, I get an error that the eth1 bridge does not exist.
Hi Prakhar,
The scenario you've described is very easily achievable, you only need to
create another private network vnet instance (created with the onevnet
utility) and add two NIC sections per VM, one for the public NIC and one for
the private one.
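
Sketched concretely, assuming two vnets with the hypothetical names "Public" and "Private" have already been created with onevnet, the VM template would carry one NIC section per network:

```
# vm.template -- hypothetical VM template excerpt
NAME   = "testvm"
MEMORY = 512
CPU    = 1

# one NIC attached to each virtual network
NIC = [ NETWORK = "Public"  ]
NIC = [ NETWORK = "Private" ]
```

The VM would then be instantiated with `onevm create vm.template`, and OpenNebula assigns a lease from each network to the corresponding interface.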
It has one drawback, though: if you do this yo
Hi,
By private networks, I mean the virtual networks created by OpenNebula's
onevnet utility. Consider a scenario where I have 4 VMs running in my
OpenNebula cloud setup. All of them have a public IP (allocated from a
virtual network created using the onevnet utility) that's accessible from my
network
Hi,
I want to set up private networks for the VMs in my OpenNebula setup. Is it
necessary to have two physical NICs on the cluster nodes to set up private
networks? If yes, is there any alternative, so that I can still reach my VMs
via their private IPs?
Regards,
Prakhar