Thanks for the advice.  This DID work; however, I've run into more
(what I consider to be) ridiculousness:

Running hosted-engine setup seems to assume that the host is not
already added to the Default cluster.  (I had an issue previously where
I had renamed the cluster, and that messed up the hosted-engine setup
since it appears to look for 'Default' as the cluster name - that seems
like something it should query for vs. assuming it never changed in my
environment.)

So, since I've already added the host (in order to get storage NICs
configured and be able to reach my storage, and thus the hosted
engine's VM disk), it won't allow me to add the hosted engine on this
node.  This is not a good experience for a customer, in my view.  If
the host has already been added, a simple prompt asking whether that is
the case should suffice.

Please take my comments as customer-experience feedback rather than
hateful complaints.  I genuinely believe in this product, but issues
like this are VERY common in the enterprise space and could very well
scare off people who have used products where this process is
significantly cleaner.

We need to be able to create storage NICs prior to hosted-engine
setup.  What's more, we need to KNOW not to add a node to the engine if
we intend to run a hosted engine on it (and thus let the hosted-engine
setup add the node to the DC/cluster/etc.).  That should be made very,
very clear.
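For reference, something like the following ifcfg files is roughly what I'd expect to need for a storage bond/VLAN before hosted-engine setup. This is a sketch only: the NIC names (em2/em3), bond name, VLAN ID 100, and the IP address are made-up placeholders, and the real files would live under /etc/sysconfig/network-scripts/ - the sketch writes them to the current directory so it is harmless to run as-is.

```shell
# Sketch: hand-written ifcfg files for an LACP storage bond plus a VLAN
# interface on top of it. All names/addresses below are assumptions.

# LACP bond for storage traffic
cat > ifcfg-bond0 <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
ONBOOT=yes
EOF

# Slave interface (repeat for em3)
cat > ifcfg-em2 <<'EOF'
DEVICE=em2
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
EOF

# VLAN interface on the bond, carrying the storage subnet
cat > ifcfg-bond0.100 <<'EOF'
DEVICE=bond0.100
VLAN=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes
EOF
```

Once files like these are in place under /etc/sysconfig/network-scripts/ on the node, bringing the storage interface up (ifup bond0.100, or a network service restart) should be enough to reach the SAN before hosted-engine setup runs.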



On Wed, Feb 24, 2016 at 8:16 AM, Fabian Deutsch <> wrote:
> Hey Christopher,
> On Tue, Feb 23, 2016 at 8:29 PM, Christopher Young
> <> wrote:
>> So, I have what I think should be a standard setup where I have
>> dedicated NICs (to be bonded via LACP for storage) as well as NICs
>> for various VLANs, etc.
>> As typical, I have the main system interfaces for the usual system IPs
>> (em1 in this case).
>> A couple of observations (and a "chicken and the egg problem"):
>> #1.  The RHEV-H/Ovirt-Node interface doesn't allow you to configure
>> more than one interface.  Why is this?
> This is by design. The idea is that you use the TUI to configure the
> initial NIC; all subsequent configuration will be done through Engine.
>> #2.  This prevents me from bringing up an interface for access to my
>> Netapp SAN (which I keep on separate networking/VLANs for
>> best-practices purposes).
>> If I'm unable to bring up both a regular system interface AND an
>> interface for my storage, then how am I going to be able to install
>> the RHEV-M (engine) hosted-engine VM, since I can't simultaneously
>> have an interface for this VM's IP and a connection to my storage
>> network?
>> In short, I'm confused.  I see this as a very standard enterprise
>> setup so I feel like I must be missing something obvious.  If someone
>> could educate me, I'd really appreciate it.
> This is a valid point - you can not configure Node from the TUI to
> connect to more than one network.
> What you can do, however, is temporarily set up a route between the
> two networks to bootstrap Node. After setup you can use Engine to
> configure another NIC on Node to access the storage network.
> The other option I see is to drop to shell and manually configure the
> second NIC by creating an ifcfg file.
> Note: In the future we plan to let you use Cockpit to configure
> networking - this will also allow you to configure multiple NICs.
> Greetings
> - fabian
>> Thanks,
>> Chris
>> _______________________________________________
>> Users mailing list
> --
> Fabian Deutsch <>
> RHEV Hypervisor
> Red Hat