[ovirt-users] Impact of changing VLAN of ovirtmgmt in the Data Center?

2016-02-25 Thread Garry Tiedemann

Hi everyone,

In the Data Centers > (Name) > Networks section of the oVirt GUI, network 
definitions include the VLAN IDs.
In my case, the VLAN ID of ovirtmgmt has been empty (meaning untagged, 
i.e. VLAN 1) since I built it; it has always been wrong.


My hypervisor hosts' ovirtmgmt bridges are actually in VLAN 20.

An error message alerted me to this mismatch a few days ago.

None of my production VMs are on VLAN 1 or VLAN 20, but I'd like to 
confirm that it's safe to change this.


Can changing the VLAN ID of ovirtmgmt within Data Center > Networks 
impact VMs from other VLANs?


I see no reason why it should be a problem. We just need some certainty 
on that point.
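Before making the change, the current definition can be checked from the engine side. A hypothetical sketch using the oVirt REST API; the hostname, the credentials, and the use of --insecure are placeholders for your own setup:

```shell
# List network definitions known to the engine; the <vlan id="..."/>
# element (absent for ovirtmgmt in my case) is what the GUI edit adds.
curl --insecure -u 'admin@internal:password' \
     -H 'Accept: application/xml' \
     'https://engine.example.com/ovirt-engine/api/networks' \
  | grep -A 3 'ovirtmgmt'
```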


Thank you in advance for answers.

Regards,

Garry

PS This is a briefer and clearer re-statement of the question I asked a 
couple of days ago.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] management network VLAN mismatch

2016-02-23 Thread Garry Tiedemann

Hi folks,

I'm in the process of upgrading to 3.6, and have encountered a new error 
message. I understand the error, and how to fix it.
My question is, can fixing it affect production VM traffic? I don't 
think so, but I'd appreciate confirmation from someone who really knows, 
please.


Description of what I am doing:
I've upgraded hosted-engine, and am now upgrading the first hypervisor node.
My method, which may be outdated, is: with no VMs running on the host, I 
drop it out of the cluster, run yum updates and reboot, then add the 
host back to the cluster when it's ready.


When I add the host to the cluster, I get this error message:
Failed to configure management network on host bl05. Host bl05 has an 
interface eth0.20 for the management network configuration with VLAN-ID 
(20), which is different from data-center definition (none).


I know what this means, and will explain it for the benefit of others. 
It means that:
* the ovirtmgmt bridge on the host is attached to a VLAN 20 
subinterface. That is by design; it is correct.
* However, in the Data Center > Networks definition of ovirtmgmt, there 
is no VLAN tag defined, so my hosted-engine believes that ovirtmgmt 
should be in VLAN 1 (untagged).

Also, for completeness:
* there is PVID 20 on the adjacent switchport, which means "untagged 
ingress traffic should be tagged into VLAN 20". This is why my ovirtmgmt 
network has been working.
* I am grateful for the error message now appearing, as it shows me 
where I misconfigured this a long time ago.
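The host-side state can be confirmed quickly; the interface names here come from the error message above (eth0 and the subinterface number are specific to this host):

```shell
# Show VLAN details of the subinterface; -d prints the 802.1Q tag,
# which should report: vlan protocol 802.1Q id 20
ip -d link show eth0.20

# Confirm eth0.20 is the port attached to the ovirtmgmt bridge
brctl show ovirtmgmt
```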


So, to fix this, I need to add the VLAN 20 tag to ovirtmgmt in Data 
Center > Networks.
My question: If I add the VLAN 20 tag to ovirtmgmt in Data Center > 
Networks, is there any possibility of breaking traffic (or VMs) on other 
(production) VLANs?
From my general knowledge I cannot see why that would be the case, but 
I would be grateful for an informed opinion from someone who has tried 
it before, please.


Best regards to all,

Garry





[ovirt-users] Moving hosted engine storage

2014-05-08 Thread Garry Tiedemann
Hi guys,
Has anyone successfully moved hosted-engine before?
I built a 3.4 hosted-engine on CentOS a couple of weeks back, over Gluster NFS 
with keepalived, per Andrew's procedure. I want to switch to kernel NFS on 
localhost:/hosted-engine, and retain Gluster underneath, just for mirroring.
It should have been simple: drop Gluster NFS, mount the glusterfs volume 
locally on each host, alter hosted-engine.conf, and fire up hosted-engine. 
It didn't go well: the all-zeros storage pool just wouldn't come online. 
I have logs, though anyone who recognises this would probably know it without them.
I have a few other VMs running, isolated on one host for convenience. A rebuild 
is not out of the question.
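For anyone who recognises the symptom, roughly what I ran on each host (the paths, the storage= key, and the service names are as on my hosts and may differ on yours; the HA services were stopped everywhere first):

```shell
# Stop the hosted-engine HA services ('service ... stop' on EL6)
systemctl stop ovirt-ha-agent ovirt-ha-broker

# Sanity check: the glusterfs volume mounts locally
mount -t glusterfs localhost:/hosted-engine /mnt/he-test
umount /mnt/he-test

# Point the HA stack at kernel NFS on localhost instead of Gluster NFS
sed -i 's|^storage=.*|storage=localhost:/hosted-engine|' \
    /etc/ovirt-hosted-engine/hosted-engine.conf

systemctl start ovirt-ha-broker ovirt-ha-agent
```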
Would appreciate any pointers from the community.
Regards to all,
Garry



[ovirt-users] iso-uploader with hosted-engine

2014-04-29 Thread Garry Tiedemann

Hi guys,

I built my hosted-engine with three nodes (3.4, CentOS), and built a 
Gluster ISO domain, connected it up, all good.


That is, until I came to engine-iso-uploader: it is present on the 
hosted-engine but not on the other VMs. And my Gluster subnet is not 
routed, so the engine VM can't see the ISO domain to upload ISOs into.


How are other people getting around this?

I can see:
1. Obvious: make the Gluster LAN routed. It wouldn't hurt, just for 
transferring a few ISOs.
2. Add a second NIC to the hosted-engine VM, so that it can see the 
Gluster LAN.
3. Put engine-iso-uploader on one of the nodes, if that's possible.
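For context, what I'd run from the engine once it can reach the ISO domain (the domain name and ISO filename here are placeholders):

```shell
# List available ISO domains known to this engine
engine-iso-uploader list

# Upload an image into the named ISO domain
engine-iso-uploader --iso-domain=ISO_DOMAIN upload CentOS-6.5-x86_64-minimal.iso
```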

My preference is to connect hosted-engine to the Gluster LAN.

Is it possible to add another NIC to the VM after it's built? (I don't 
want to rebuild if avoidable; I've done that enough times.)
I know that the oVirt GUI isn't aware of the first NIC on the VM, so I 
haven't dared to try adding a NIC there. Is that safe?
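A hypothetical sketch of doing it through the REST API instead of the GUI; the VM id, network name, and credentials are placeholders, the exact payload varies between oVirt versions, and whether this is safe for the hosted-engine VM is exactly my question:

```shell
# Attach a second NIC on the 'gluster' network to an existing VM.
# VM_ID is the VM's UUID as shown by GET /api/vms.
curl --insecure -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -X POST \
     -d '<nic><name>nic2</name><network><name>gluster</name></network></nic>' \
     'https://engine.example.com/ovirt-engine/api/vms/VM_ID/nics'
```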


Thanks in advance for your thoughts on this.

cheers,

Garry

