Cancel my last email, as I peeked at a server I set up last year that has
multiple interfaces without issue. It's working fine.
I don't recall, but can you gentlemen tell me if there are any routes that need
to be set?
My guest VMs being on a 2nd or 3rd NIC interface can't get an IP via DHCP and
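Bridged guests normally need no extra host routes for DHCP to work; the frames are switched at layer 2. What usually matters is that the physical NIC is actually enslaved to the bridge the guest uses. A quick check (interface names here are examples, not the poster's actual devices):

```shell
brctl show            # each bridge should list its physical NIC plus the guests' vnetX taps
ip addr show br1      # the IP address, if any, belongs on the bridge, not on the enslaved NIC
```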
On 22/11/13 17:11, aurfalien wrote:
Sorry guys, I've tried and tried, no dice.
Seems like I am missing a vnet1, vnet2, etc... to br0 association.
I can see where the vnet# gets created upon VM startup.
And based on how my VM xml file is set, it will go to either br0, br1, br2,
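For reference, associating a tap device with a bridge by hand is a single brctl call (device names are illustrative; normally libvirt does this automatically at VM start, so this is only for manual repair or testing):

```shell
brctl addif br0 vnet0   # enslave the tap device to the bridge
brctl show br0          # vnet0 should now be listed under br0's interfaces
```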
On Nov 22, 2013, at 3:51 PM, Digimer wrote:
On 22/11/13 18:11, aurfalien wrote:
Cancel my last email as I peeked at a server I set up last year w/o issue
having multiple interfaces. Its working no issue.
I don't recall but can you gentlemen tell me if there are any routes that
need to
Stay out of udev if you can. It's often overwritten by component
addition and manipulation. MTU is parsed, and overridden, by options in
/etc/sysconfig/network-scripts/ifcfg-[device]. I find it much safer to
read and manage it there, and if new devices are added or replaced, the
behavior is dominated
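For example, raising an interface's MTU on a CentOS 6 host is done in its ifcfg file rather than in udev rules (device name and values here are illustrative):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0  (illustrative)
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
MTU=9000
```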
Hi,
I seem to lack a vnet to bridge device.
When I go to change my interface on the VM using the GUI, I do not see
an option for Host device vnet# (Bridge 'br6').
Instead I see Host device eth6 (Bridge 'br6'). So before creating one via:
brctl addif...
Let me explain my config:
On Nov 21, 2013, at 2:24 PM, Digimer wrote:
I'm not sure what you are asking.
You should not see the vnetX devices from the VM (or even the VM's
definition file). They're created as needed to link the VM's interface
to the bridge. Think of them as simple network cables.
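In other words, the VM's definition names only the bridge; libvirt creates the vnetX tap at start time. A minimal interface stanza in the domain XML (bridge name illustrative) looks like:

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <!-- no vnetX appears here; libvirt picks the next free tap name when the VM starts -->
</interface>
```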
Some of the
On 21/11/13 17:32, aurfalien wrote:
On Nov 21, 2013, at 2:24 PM, Digimer wrote:
I'm not sure what you are asking.
You should not see the vnetX devices from the VM (or even the VM's
definition file). They're created as needed to link the VM's interface
to the bridge. Think of them as
On Nov 21, 2013, at 2:36 PM, Digimer wrote:
On 21/11/13 17:32, aurfalien wrote:
On Nov 21, 2013, at 2:24 PM, Digimer wrote:
I'm not sure what you are asking.
You should not see the vnetX devices from the VM (or even the VM's
definition file). They're created as needed to link the VM's
On 21/11/13 17:42, aurfalien wrote:
On Nov 21, 2013, at 2:36 PM, Digimer wrote:
On 21/11/13 17:32, aurfalien wrote:
On Nov 21, 2013, at 2:24 PM, Digimer wrote:
I'm not sure what you are asking.
You should not see the vnetX devices from the VM (or even the VM's
definition file).
On Nov 21, 2013, at 2:45 PM, Digimer wrote:
The 'vnetX' number doesn't relate to the interface, bridge or anything
else. The vnetX number is a simple sequence that increments each time a
VM is started. So don't think that you need 'vnet6'... it can be anything.
The 'brctl show' output from
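So with two running VMs attached to br0, 'brctl show' output might look something like this (names and the bridge ID are made up for illustration):

```
bridge name   bridge id           STP enabled   interfaces
br0           8000.001a2b3c4d5e   no            eth0
                                                vnet0
                                                vnet1
```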
I was under the impression that the relevant MTU settings were on the
*node's* local ifcfg-eth* configurations. Did something change with
KVM internal networking in the last year?
On Thu, Nov 21, 2013 at 1:03 PM, Digimer li...@alteeve.ca wrote:
The problem is that there are no ifcfg-vnetX config
On 21/11/13 18:20, aurfalien wrote:
On Nov 21, 2013, at 2:45 PM, Digimer wrote:
The 'vnetX' number doesn't relate to the interface, bridge or anything
else. The vnetX number is a simple sequence that increments each time a
VM is started. So don't think that you need 'vnet6'... it can be
This is interesting stuff. I do note that the virt-manager tool,
and NetworkManager, give *no* insight or management detailed enough
to resolve this stuff. Note also that dancing through all
the hoops to get this working, end-to-end, is one of the big reasons
that most environments refuse
It's not so much hard as it is knowing all the hops in your network. If
anything along the chain has a low MTU, the whole route is effectively
reduced.
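One way to verify the effective path MTU end-to-end is to send non-fragmentable pings at the target size (the host address is a placeholder; adjust sizes for your network):

```shell
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do sets the don't-fragment bit
ping -M do -s 8972 -c 3 192.168.1.10
# If any hop in the chain has a lower MTU, this reports a fragmentation error
# instead of replies; shrink -s until it succeeds to find the real path MTU.
```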
On 21/11/13 20:20, Nico Kadel-Garcia wrote:
This is interesting stuff. I do note that the virt-manager tool,
and NetworkManager, give *no*
What you do in the VMs does not impact the hosts, so I didn't speak to
that. Having the bridge, interfaces, switches and vnets at 9000 (for
example) doesn't immediately enable large frames in the virtual servers.
It simply means that all of the links between the VM and other devices
on the network
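That is, the guest still has to opt in: inside the VM, the interface's own MTU must be raised as well. For a CentOS guest, a sketch (filename and values illustrative):

```
# Inside the guest: /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
MTU=9000
```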
I wrote this last year. I've found no other description that lays out
the difficulties of KVM bridges, tagged VLANs, and pair bonding.
https://wikis.uit.tufts.edu/confluence/display/TUSKpub/Configure+Pair+Bonding,+VLANs,+and+Bridges+for+KVM+Hypervisor
I'm not working for that university
On 20/11/13 19:04, aurfalien wrote:
Hi,
Wondering if this is the proper bridging technique to use for Centos6+KVM;
http://wiki.centos.org/HowTos/KVM
Before I embark on this again, I would like to do it by the book.
Thanks in advance,
- aurf
Personally, I do this:
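A common CentOS 6 pattern (a sketch of the usual ifcfg pairing, not necessarily the exact files meant here) puts the IP configuration on the bridge and enslaves the physical NIC to it:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
```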
On Nov 20, 2013, at 4:13 PM, Digimer wrote:
On 20/11/13 19:04, aurfalien wrote:
Hi,
Wondering if this is the proper bridging technique to use for Centos6+KVM;
http://wiki.centos.org/HowTos/KVM
Before I embark on this again, I would like to do it by the book.
Thanks in advance,
-
On Nov 20, 2013, at 4:44 PM, Digimer wrote:
On 20/11/13 19:25, aurfalien wrote:
On Nov 20, 2013, at 4:13 PM, Digimer wrote:
On 20/11/13 19:04, aurfalien wrote:
Hi,
Wondering if this is the proper bridging technique to use for Centos6+KVM;
http://wiki.centos.org/HowTos/KVM
Before
On Nov 20, 2013, at 4:47 PM, Digimer wrote:
On 20/11/13 19:47, aurfalien wrote:
On Nov 20, 2013, at 4:44 PM, Digimer wrote:
On 20/11/13 19:25, aurfalien wrote:
On Nov 20, 2013, at 4:13 PM, Digimer wrote:
On 20/11/13 19:04, aurfalien wrote:
Hi,
Wondering if this is the proper
On 20/11/13 20:49, aurfalien wrote:
On Nov 20, 2013, at 4:47 PM, Digimer wrote:
On 20/11/13 19:47, aurfalien wrote:
On Nov 20, 2013, at 4:44 PM, Digimer wrote:
On 20/11/13 19:25, aurfalien wrote:
On Nov 20, 2013, at 4:13 PM, Digimer wrote:
On 20/11/13 19:04, aurfalien wrote:
Hi,