
On 02.10.2014 23:05, Morgan McLean wrote:
> I'm actually having issues across the board...maybe its because I'm
> going against the design ideals but I was told that in order to use
> the local storage on the box and sacrifice VM migration
> capabilities, all machines had to be in their own datacenter. Pain
> point number one.

Yes, local storage is painful to use right now, that's true, but this
should eventually change around 3.6 or so.

> Second pain point is adding additional networks aside from the
> initial management network doesn't seem to be very intuitive. I'm a
> network engineer for a living...and I can't figure it out.

Well I'm sure I can assist with that one even if I'm no network
engineer ;)

> Third pain point is ovirt management UI requires a restart semi
> frequently due to crashing.

You should file bugs for crashes; I have never seen the UI crash in
oVirt, not even in 3.2. Which version are you using, and what are you
doing when it crashes?

> I added the network under the networks tab, but I noticed it
> doesn't ask any interface questions (ok, it could technically
> attempt arp on all interfaces like a citrix netscaler).

Well, the UI is a little cluttered, so I have to ask:
Which network tab?
You define a logical network at the datacenter level, and you then have
to add it to a cluster in that datacenter. In fact, it should get added
to the cluster by default, and it will be marked as "required" (which is
a bad default; I have a BZ tracking this). After that you need to
manually attach this logical network to your host. You can further
adjust your networking by adding network profiles.

In short: the whole network management takes place in oVirt!
You do _not_ need to configure anything on the compute node/host
yourself. You can do bonding, bridging, VLANs, port mirroring, QoS,
and so on.
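To make the datacenter-level step concrete, here is a rough sketch of the payload the 3.x-era REST API expects when you define a logical network. The engine URL, credentials, and the "pod1" datacenter name are placeholders; the network name "utility" and VLAN 21 match the eth1.21 setup you describe. The element names are from the 3.x API as I remember them, so double-check them against the REST API documentation shipped with your engine:

```python
import xml.etree.ElementTree as ET

def network_xml(name, datacenter, vlan_id=None):
    """Build a <network> body for POST /api/networks (3.x-era oVirt API)."""
    net = ET.Element("network")
    ET.SubElement(net, "name").text = name
    dc = ET.SubElement(net, "data_center")
    ET.SubElement(dc, "name").text = datacenter
    if vlan_id is not None:
        # A VLAN-tagged logical network; the host side is then handled
        # by oVirt, no manual eth1.21 interface needed.
        ET.SubElement(net, "vlan", id=str(vlan_id))
    return ET.tostring(net, encoding="unicode")

body = network_xml("utility", "pod1", vlan_id=21)
print(body)
# The actual call would look something like this (untested, placeholder
# URL and credentials):
# requests.post("https://engine.example.com/api/networks", data=body,
#               headers={"Content-Type": "application/xml"},
#               auth=("admin@internal", "password"), verify=False)
```

The same body minus the data_center element is what you POST to the cluster's networks sub-collection to make the network available there.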

> I notice theres no ifconfig changes on the host.

There will not be any if you did not apply any logical networks to the
host in the first place. It is also important to know your oVirt
version, because recent vdsm versions changed the path where the
network configuration is stored. (I don't recall that path off the
top of my head, but others may provide the info here.)

> I also notice that the other interface has a bridge named exactly
> as the management name in the UI; ovirtmgmt, which is bridged to
> eth0. OK, so I create a bridge called utility (my network name) and
> mapped it to a vlan tagged interface I had setup on eth1.21.
> Everything looks identical to how ovirt setup the initial network.
> The interface itself works, I can see things on the network etc.
> Traffic passes.

I guess you tried to define the network yourself on the host?
First wrong step :) just add your logical networks to the host, either
through the UI or via the API.
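For the "via API" route, the host-side attach step looks roughly like this in the 3.x-era API (an attach action on the host NIC; the endpoint path and element names are from memory, so verify them against your version's API docs before relying on them):

```python
import xml.etree.ElementTree as ET

def attach_action_xml(network_name):
    """Body for POST /api/hosts/{host}/nics/{nic}/attach (3.x-era API)."""
    action = ET.Element("action")
    net = ET.SubElement(action, "network")
    ET.SubElement(net, "name").text = network_name
    return ET.tostring(action, encoding="unicode")

print(attach_action_xml("utility"))
# → <action><network><name>utility</name></network></action>
```

Once the attach succeeds (and you save the host network configuration), oVirt creates the bridge and the VLAN interface on the host itself, which is why hand-made bridges like your "utility" one are never picked up.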

Here are some docs you should read:


> Trying to run the VM results in errors because its failing network
> filters, says it doesn't exist etc. All I want to do, is create a
> VM with some memory, some disk, with a nic on a network, across
> individual machines. My PXE provisioning will take over from there.
> Whats the best way to do this? Sorry for all the questions -- your
> guys' response is great, and I really appreciate the help thus
> far.
> The exact message I get is:
> Error while executing action: utiltest:
> - Cannot run VM. There are no available running Hosts with all the 
> networks used by the VM. - Cannot run VM. There is no host that
> satisfies current scheduling constraints. See below for details: -
> The host load3.pod1.########.com did not satisfy internal filter 
> Network

Exactly: that error means the logical network you defined for your
datacenter was never attached to your host (compute node) through
oVirt, so the scheduler cannot find a host carrying it.




Users mailing list
