At 02:56 PM 2/22/2001 +0100, Bogdan Costescu wrote:
>On Wed, 21 Feb 2001, Dan B wrote:
>
> > If you can afford it, use 4 Gigabit (1000 Mbps) network cards in FastEtherChannel
> > (called bonding/trunking), which will give you 4 Gbps of network bandwidth.  After you
> > convert bits to bytes, this works out to 500 MB/s, which is what your RAID
> > card is capable of.  (Of course, you need a switch that has 4 Gigabit ports
> > and is capable of FEC.)
>
>A word of caution: in a typical PC mainboard, the PCI bus might become a
>bottleneck; its theoretical bandwidth is 132 MB/s, but because it's a
>shared bus where communication is done in bursts, congestion usually
>keeps you from using more than 1/3 to 1/2 of it. What's more, the
>congestion usually gets worse when you have many devices, as opposed to
>a few (but higher-bandwidth) devices.
>Folks on the beowulf list say that even one Gigabit Ethernet card might not
>reach its peak, let alone bonding several of them.
>In any case, you should try to get one of the new ServerWorks-based
>mainboards (PCI is 64-bit/66MHz) if you want to stuff so many cards in it.

True.  We enjoy building servers on the Intel SPKA4 platform and the 8-way 
platform, both of which have several 64-bit/66MHz slots and multiple 
buses.  (I think they either licensed the ServerWorks chipset or bought the chips.)


> > ... Combine that with 8 cheap Intel Pro/100 cards in FEC, and
> > you have an 800 Mbps file server.
>
>Also from the beowulf list: bonding 2 or 3 FE NICs gets you the multiplied
>bandwidth, but the 4th is in most cases a failure

I haven't experienced that.  I think I'll be joining the beowulf list here 
soon.

>- either you get just a
>bit more than 3x speed-up or you get _less_ than 3x speed-up. It's still
>unclear if this is a limitation from NICs, PCI bus, drivers or something
>else. (AFAIK, this was tried only with normal 32-bit/33MHz PCI buses, but
>with several cards/drivers: 3Com 905B/C, 4-port Zynx (2114x based), Intel
>EEPro/100). If you do have a different experience, please share it!

You're right, that's been my experience as well.

But from what I understand of the FEC protocol, the last link (e.g. the 4th 
card in a 4-channel trunk) isn't just a failover link.  The practical 
effect of the FEC protocol, though, is that you get about (N-1)x bandwidth 
out of an N-channel trunk, which is to be expected (not many things scale 
linearly).  In my opinion it's comparable to processor scalability, where 
4 processors give you the combined power of about 3 standalone processors.
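To make the arithmetic above concrete, here is a quick back-of-the-envelope sketch (the link counts and speeds are just the figures from this thread, not measurements):

```shell
# Theoretical throughput of a 4-port Gigabit trunk:
#   4 links x 1000 Mb/s = 4000 Mb/s; divide by 8 to get megabytes/sec.
echo $(( 4 * 1000 / 8 ))    # 500 (MB/s, theoretical peak)

# With the rough (N-1)x scaling observed on bonded links, a 4-channel
# trunk behaves more like 3 links' worth:
echo $(( 3 * 1000 / 8 ))    # 375 (MB/s, closer to what you'd see)
```

The same math explains the 8-port Fast Ethernet case: 8 x 100 Mb/s = 800 Mb/s raw, or about 700 Mb/s after the (N-1) haircut.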

However, what I really like is the powerful redundancy of FEC.  You could 
lose any of the 4 NICs in a trunk and it wouldn't miss a beat.  Heck, you 
could lose three NICs and keep pumping (at lower throughput).
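For anyone wanting to try this on Linux, a minimal sketch of a 4-NIC bonded setup looks something like the following.  The interface names and IP address are placeholders, and it assumes your kernel has the bonding driver and you have the ifenslave tool installed:

```shell
# Load the bonding driver (assumes it was built for your kernel).
modprobe bonding

# Bring up the master interface with a placeholder address.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up

# Enslave the four physical NICs to the trunk.
ifenslave bond0 eth0 eth1 eth2 eth3
```

On the switch side, the matching ports must be grouped into an EtherChannel so both ends agree on the trunk; if a slave NIC dies, traffic keeps flowing over the remaining links.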

By the way, did you know that "scalability" isn't a word?  The tech industry 
made it up.  It won't be long before it's in Webster's, though.


Dan Browning, Cyclone Computer Systems, [EMAIL PROTECTED]

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
