Hello Colin,
I know the equipment you have very well, as I worked with it for a
long time. Great stuff, I can say.
Everything looks OK from what you describe, except the iSCSI network:
it should not be a bond, but two independent VLANs (and subnets) using
iSCSI multipath. A bond works, but it's not the recommended setup for
this scenario.
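Just to illustrate what I mean (interface names, IP addresses and
portal addresses below are made up, so adapt them to your environment),
a rough sketch on a RHEL 7 host would be:

  # /etc/sysconfig/network-scripts/ifcfg-eno5  (iSCSI subnet A)
  DEVICE=eno5
  BOOTPROTO=static
  IPADDR=10.0.10.11
  PREFIX=24
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eno6  (iSCSI subnet B)
  DEVICE=eno6
  BOOTPROTO=static
  IPADDR=10.0.20.11
  PREFIX=24
  ONBOOT=yes

  # Bind an iSCSI iface to each NIC, then discover and log in
  # through both so dm-multipath sees one path per subnet:
  iscsiadm -m iface -I iface-eno5 --op=new
  iscsiadm -m iface -I iface-eno5 --op=update -n iface.net_ifacename -v eno5
  iscsiadm -m iface -I iface-eno6 --op=new
  iscsiadm -m iface -I iface-eno6 --op=update -n iface.net_ifacename -v eno6
  iscsiadm -m discovery -t sendtargets -p 10.0.10.50 -I iface-eno5
  iscsiadm -m discovery -t sendtargets -p 10.0.20.50 -I iface-eno6
  iscsiadm -m node --loginall=all
  multipath -ll   # each LUN should now show one path per subnet

If I remember correctly, RHEV can also drive this for you from the
iSCSI Multipathing tab on the data center once the two storage
networks are defined as logical networks.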
Fernando
On 24/06/2016 22:12, Colin Coe wrote:
Hi all
We run four RHEV datacenters: two PROD, one DEV and one TEST/Training.
They are all working OK, but I'd like a definitive answer on how I
should be configuring the networking side, as I'm pretty sure we're
getting sub-optimal network performance.
All datacenters are housed in HP C7000 blade enclosures. The PROD
datacenters use HP 4730 iSCSI SAN clusters; each datacenter has a
cluster of two 4730s, configured with RAID5 internally and NRAID1.
The DEV and TEST datacenters use P4500 iSCSI SANs; each datacenter
has a cluster of three P4500s, configured with RAID10 internally
and NRAID5.
The HP C7000 enclosures each have two Flex10/10D interconnect modules
configured in a redundant ring so that we can upgrade the interconnects
without dropping network connectivity to the infrastructure. We use fat
RHEL-H 7.2 hypervisors (HP BL460), and these are all configured with
six network interfaces:
- eno1 and eno2 are bond0, which is the rhevm interface
- eno3 and eno4 are bond1; all the VM VLANs are trunked over this
bond using 802.1q (rough sketch below)
- eno5 and eno6 are bond2, dedicated to iSCSI traffic
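For reference, here is roughly what one of those bonds and a tagged VM
network look like on a host (the VLAN ID, bridge name and bonding mode
are placeholders only, and VDSM normally writes these files itself):

  # /etc/sysconfig/network-scripts/ifcfg-bond1
  DEVICE=bond1
  # bonding mode depends on what the interconnects support
  BONDING_OPTS="mode=1 miimon=100"
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond1.100  (one tagged VLAN)
  DEVICE=bond1.100
  VLAN=yes
  BRIDGE=vmnet100
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-vmnet100  (bridge the VMs attach to)
  DEVICE=vmnet100
  TYPE=Bridge
  BOOTPROTO=none
  ONBOOT=yes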
Is this the "correct" way to do this? If not, what should I be doing
instead?
Thanks
CC
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users