On Fri, 2007-07-27 at 15:36 +1000, Gavin Carr wrote:
> What I'd _really_ like, though (and haven't found any explicit references
> to yet), is like (3) but actually duplicating packets down multiple links: a
> sort of 'network raid 1', where (3) is network raid 0. In other words,
> something that transparently splits a stream into multiple duplicate streams
> down separate links, which are then merged/multiplexed at the other end, and
> duplicates discarded. Effectively trading bandwidth for latency, given
> multiple links.
>
> Anyone heard of anything at all like that? Or am I crazy?
For a single direction: standard end-host behaviour is to discard duplicates, which can occur naturally (though not commonly). As for duplicating the outbound packets onto multiple links, you can probably roll your own with a little bit of userspace hackery; I'd use the LARTC HOWTO as a starting point.

-Rob
--
GPG key available at: <http://www.robertcollins.net/keys.txt>.
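To illustrate the roll-your-own approach, here's a minimal userspace sketch of the idea: tag each datagram with a sequence number, send the same packet over every link, and have the receiver deliver each sequence number only once. This is purely hypothetical, not any existing tool; the socket setup, the 32-bit sequence field, and the unbounded `seen` set are all assumptions (a real implementation would use a sliding window and bind each sender socket to a specific interface).

```python
import socket
import struct

def send_duplicated(payload, seq, socks, dests):
    """Send the same datagram down every link ('network raid 1' direction).

    socks/dests are parallel lists: one UDP socket and one destination
    address per physical link. The 32-bit big-endian sequence number is
    a made-up framing convention for this sketch.
    """
    pkt = struct.pack("!I", seq) + payload
    for s, d in zip(socks, dests):
        s.sendto(pkt, d)

class Deduplicator:
    """Receiver side: merge the duplicate streams back into one.

    Keeps every seen sequence number forever, which is fine for a sketch
    but would need a sliding window in practice.
    """
    def __init__(self):
        self.seen = set()

    def accept(self, pkt):
        seq = struct.unpack("!I", pkt[:4])[0]
        if seq in self.seen:
            return None  # duplicate arrived on the slower link; drop it
        self.seen.add(seq)
        return pkt[4:]  # first copy to arrive wins: latency of fastest link
```

The receiver would call `accept()` on every datagram from every link and forward only the non-`None` results, so the application sees one stream with the latency of whichever link delivered each packet first.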
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
