On 10/11/2010 17:25, Tom Eastep wrote:
> On 11/10/10 7:38 AM, Ruth Ivimey-Cook wrote:
>> On 10/11/2010 04:38, Mahesh B Gupta wrote:
>>> We have two ethernet cards, where the data should be sent to two
>>> ethernet cards all the time to provide 200% data. If one is dropped,
>>> the other shall be used at 100% data rate. I understood from your
>>> explanation that.. we can route all the data to both the interfaces
>>> with the
>> You are talking about ethernet bonding, which is quite possible with
>> both Windows and Linux, but the implementation varies considerably. It
>> is also outside the scope of this mailing list. Google is your friend here.
> I'm under the impression that bonding only sends packets over a single
> link, rather than all links. I agree that bonding is a logical solution
> to this problem but it is not quite what Mahesh's customer is asking for.

Perhaps I have misunderstood the need, but with bonding, you use the 
bandwidth of two links to achieve twice the maximum data rate of one 
link. If the requirement is to send the same packets over multiple 
ethernet links, duplicating the streams, then bonding isn't the 
solution, but I don't know of any potted solution that will do this: it 
is of course possible, but I strongly suspect that kernel coding is 
needed. And why would this be good?

Back on bonding, for any individual packet it will only use one link, 
but overall, both are used at once. A possible implementation would be 
to send even-numbered packets over link1 and odd-numbered over link2, 
although that is very simplistic. Links are usually bonded at both ends 
in the same way.
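As a sketch only (assuming a reasonably modern Linux kernel with the 
bonding driver and the iproute2 tools; the interface names eth0/eth1 and 
the address are placeholders, not a recommendation), a round-robin bond 
along the lines described above might be set up like this:

```shell
# Create a bond device in balance-rr mode, which alternates
# outgoing packets across the member links, roughly the
# even/odd scheme sketched above.
ip link add bond0 type bond mode balance-rr

# Enslave the two physical NICs (they must be down first).
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring the bond up and address only the virtual interface;
# the slaves are no longer used independently.
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

The switch or host at the far end would normally need a matching bond 
configuration for traffic in the other direction.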

See: http://en.wikipedia.org/wiki/Channel_bonding

Most implementations of bonding result in a new virtual network 
interface being created and at least an implication that the component 
interfaces must not be used independently. You can therefore use 
shorewall etc on top of the result without needing to know that it is 
bonded, which is why I said it was out of the scope of the list.
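To illustrate that transparency (a sketch; the zone name and options 
here are placeholders, not a recommendation), an /etc/shorewall/interfaces 
entry would name only the bonded device:

```
#ZONE   INTERFACE   OPTIONS
net     bond0       tcpflags,routefilter
```

The physical member interfaces never appear in the Shorewall 
configuration at all.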

One flaw of bonding is that there is an assumption that the computer has 
the capacity to use and benefit from the additional bandwidth, which is 
not necessarily true. For example, a 100Mbps link running on PCI boards 
is nearly capable of saturating the processor's PCI bus, and adding new 
PCI boards will not help unless they live on a separate bus. Similarly, 
CPU and disk performance are likely to limit throughput more than the 
speed of a 1Gbps link - my own setup uses PCI-e and 1Gbps links 
throughout and cannot saturate them even with very fast CPUs and disks at 
both ends. Use of highly tuned hardware and software can overcome this 
to some degree but the tuning should come first, and bonding is just one 
minor part of that. [Of course, if you're bonding something as slow as a 
modem line then these concerns are unfounded.]

HTH,

Ruth



_______________________________________________
Shorewall-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/shorewall-users
