Claer wrote, sometime around 15/07/08 07:31:
> On Mon, Jul 14 2008 at 28:15, Martín Coco wrote:
>> Thanks!
>>
>> Have you tried the quad NICs on those Dells? We do have a couple of
>> R200s, 860s and 850s running with two dual-port cards, no problem,
>> but we have never tried the quad ports.
> Hello,
>
> I have around 20 Dell 860s and R200s, each with 2 Intel quad-port
> cards. That is a total of 10 interfaces on those cheap Dells.
>
> You'll never hit any problem if you use only one quad-port card. Be
> careful with 2 cards in an 860: you'll have to order the "Intel
> PRO/1000 PT Quad Port" and *NOT* the "low profile" one. So far, no
> issues with them.

I run a pair of HP DL320 G5 boxes as failover gateways
(pf/isakmpd/ospfd/dhcpd), with an Intel PRO/1000 PT quad-port card in
each, giving me 6 interfaces per box. The onboard Ethernet controller is
bge, and the Intel ones are em. I use the onboard port for a crossover
link between the two gateways, and the other 4 connections are split
into 2 bonded pairs.
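
For anyone after the mechanics: on OpenBSD that bonding would be
trunk(4), configured via hostname.if(5). A minimal sketch of one bonded
pair, assuming em0 and em1 are two ports on the quad card and failover
mode (the interface names and address below are illustrative, not our
actual config):

    # /etc/hostname.em0 and /etc/hostname.em1: just bring the ports up
    up

    # /etc/hostname.trunk0: bond the two em ports; trunkproto lacp is
    # an alternative if the switch supports it
    trunkproto failover trunkport em0 trunkport em1
    inet 192.0.2.2 255.255.255.0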

One is a plain old bond to a separate network, and the other bonded pair
has 5 VLANs running over it. CARP is used on pretty much all the links,
and it works great.
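
For one of those VLANs, the stacking is vlan(4) on top of the trunk,
with carp(4) on top of that for the shared gateway address. A rough
sketch (the VLAN tag, addresses, vhid and password here are all made
up):

    # /etc/hostname.vlan5: tag 5 over the second bonded pair
    inet 10.5.0.2 255.255.255.0 NONE vlan 5 vlandev trunk1

    # /etc/hostname.carp5: shared gateway address for that VLAN; the
    # backup box would add something like "advskew 100"
    inet 10.5.0.1 255.255.255.0 10.5.0.255 vhid 5 carpdev vlan5 pass examplepass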

I haven't performed any particularly scientific performance tests, but
from what I recall I did push ~800Mbit/s through them using iperf.
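
That test was nothing fancier than the stock iperf client/server pair
run through the gateways, along these lines (the server address is made
up):

    $ iperf -s                   # on a host on one side of the gateways
    $ iperf -c 192.0.2.10 -t 60  # on a host on the other side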

If you were to stick two of the cards in, you'd need one full-height and
one low-profile card, as only one of the PCIe slots on the DL320 is full
height. You'd also need to make sure you ordered the right version of
the server (I think you can get it with one PCIe and one PCI-X slot, as
well as with two PCIe slots).

I'm still not sold on the benefits of bonding when you have a failover pair of gateways, but we had the budget for the extra ports, so why not? It gives me room to expand by breaking the bonds if necessary.

Next task is to fix munin (or replace it with something else) so that I
can actually get bandwidth stats graphed.

--
Russell Howe, IT Manager. <[EMAIL PROTECTED]>
BMT Marine & Offshore Surveys Ltd.
