On Wed, 21 Feb 2001, Dan B wrote:
> If you can afford it, do 4 1000 Mbps (gigabit) network cards in FastEtherChannel
> (called bonding/trunking), which will give you 4 Gbps of network bandwidth. After
> you convert bits to bytes, this will give you 500 MB/s, which is what your RAID
> card is capable of. (Of course, you need a switch that has 4 gigabit ports
> and is capable of FEC.)
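The arithmetic behind the quoted figures, spelled out as a quick sanity check (just a sketch of the reasoning, nothing more):

```python
# Back-of-the-envelope check of the quoted numbers.
# Four gigabit (1000 Mbps) NICs bonded with FastEtherChannel:
nics = 4
mbps_per_nic = 1000
aggregate_mbps = nics * mbps_per_nic   # 4000 Mbps of aggregate link bandwidth

# Convert bits to bytes (8 bits per byte):
aggregate_MBps = aggregate_mbps / 8    # 500 MB/s, matching the RAID card figure

print(aggregate_mbps, aggregate_MBps)
```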
A word of caution: in a typical PC mainboard, the PCI bus might become a
bottleneck; its theoretical bandwidth is 132 MB/s, but because it's a
shared bus where communication is done in bursts, congestion usually
keeps you from using more than 1/3-1/2 of it. What's more, congestion
tends to get worse when you have many devices, as opposed to a few
higher-bandwidth ones.
Folks on the beowulf list say that even a single Gigabit Ethernet card might
not reach its peak rate, let alone several of them bonded together.
In any case, you should try to get one of the new ServerWorks-based
mainboards (64-bit/66MHz PCI) if you want to stuff so many cards into it.
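Where the 132 MB/s figure and the ServerWorks advantage come from, worked out from bus width and clock (a rough sketch; real buses lose further bandwidth to arbitration and burst overhead, as noted above):

```python
# Theoretical PCI bandwidth: bus width (bits) * clock (MHz) / 8 -> MB/s
def pci_bandwidth_MBps(width_bits, clock_mhz):
    return width_bits * clock_mhz / 8

standard_pci = pci_bandwidth_MBps(32, 33)   # 132 MB/s, the usual desktop bus
server_pci   = pci_bandwidth_MBps(64, 66)   # 528 MB/s, the ServerWorks boards

# Usable fraction on a shared, bursty bus (the 1/3-1/2 rule of thumb above):
usable_low  = standard_pci / 3              # ~44 MB/s
usable_high = standard_pci / 2              # ~66 MB/s
print(standard_pci, server_pci, usable_low, usable_high)
```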
> ... Combine that with 8 cheap Intel Pro/100 cards in FEC, and
> you have an 800 Mbps file server.
Also from the beowulf list: bonding 2 or 3 FE NICs gets you the multiplied
bandwidth, but adding a 4th is in most cases a failure - you either get just
a bit more than a 3x speed-up, or even _less_ than 3x. It's still unclear
whether this is a limitation of the NICs, the PCI bus, the drivers or
something else. (AFAIK, this was tried only with normal 32-bit/33MHz PCI
buses, but with several cards/drivers: 3Com 905B/C, 4-port Znyx (2114x
based), Intel EEPro/100.) If you do have a different experience, please
share it!
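One hypothesis consistent with the numbers above - not a confirmed diagnosis, since the thread itself says the cause is unclear - is that 4 Fast Ethernet NICs already push the demand into the usable range of a 32-bit/33MHz PCI bus:

```python
# Does a 4-way FE bond run into the shared-PCI limit discussed earlier?
fe_MBps = 100 / 8                       # 12.5 MB/s per Fast Ethernet NIC
pci_theoretical = 32 * 33 / 8           # 132 MB/s on a 32-bit/33MHz bus
usable = (pci_theoretical / 3, pci_theoretical / 2)  # ~44-66 MB/s in practice

demand_3_nics = 3 * fe_MBps             # 37.5 MB/s - still below the usable band
demand_4_nics = 4 * fe_MBps             # 50 MB/s - lands inside the usable band
print(demand_3_nics, demand_4_nics, usable)
```

With 3 NICs the demand stays under the pessimistic 1/3 estimate, while with 4 it sits squarely inside the 44-66 MB/s band, which would make the bus a plausible (though unproven) suspect for the missing 4th-card speed-up.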
Sincerely,
Bogdan Costescu
IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: [EMAIL PROTECTED]
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]