On Thu, Jul 26, 2007 at 10:44:19AM +1000, Visser, Martin wrote:
> I think that the technology Gavin is thinking of is more about
> economising on the content being sent rather than tweaking TCP
> parameters. 

Thanks a lot for all the comments in this thread so far - been some
interesting reading. 

So far the thread's covered two areas:

1. tuning TCP parameters for WAN performance (James' comments and links)

2. content compression magic with specialist middleboxes (Martin/Glen and 
   others)

I've done a fair bit of work around (1) before (some based on material 
Glen's written). In this particular instance I'm more interested in 
latency and interactive behaviour than raw throughput though, and I 
want to minimise exposure to long-haul link congestion and dropouts.

I've found another bunch of work around this:

3. the notion of 'network striping' - opening multiple connections over 
   one or more links to improve window scaling and/or to limit exposure
   to problems on any single link, e.g.

   - http://www.cesnet.cz/doc/techzpravy/2006/psock/

   - http://portal.acm.org/citation.cfm?id=370413&coll=portal&dl=ACM

   - http://nms.csail.mit.edu/papers/index.php?detail=127

which look pretty interesting for my use case. Anyone have any experience 
with any of these?
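For anyone who hasn't run into it, here's a minimal sketch of the striping
idea in Python - purely illustrative, with plain lists standing in for the
per-link connections (real code would use one TCP socket per link, as the
papers above do):

```python
# Sketch of 'network striping' (raid 0): a byte stream is split into
# sequence-numbered chunks, sent round-robin across several links, and
# reassembled in order at the far end. Links are modelled as lists here.

def stripe(data, num_links, chunk_size=4):
    """Split data into (seq, chunk) records, round-robin across links."""
    links = [[] for _ in range(num_links)]
    for seq, start in enumerate(range(0, len(data), chunk_size)):
        chunk = data[start:start + chunk_size]
        links[seq % num_links].append((seq, chunk))
    return links

def reassemble(links):
    """Merge the per-link streams back into the original byte order."""
    records = [rec for link in links for rec in link]
    records.sort(key=lambda rec: rec[0])  # order by sequence number
    return b"".join(chunk for _, chunk in records)

links = stripe(b"the quick brown fox", 3)
assert reassemble(links) == b"the quick brown fox"
```

The sequence numbers are what make the scheme work: each link can reorder
or stall independently, and the receiver still recovers the stream.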


What I'd _really_ like, though (and haven't found any explicit references 
to yet), is something like (3) but actually duplicating packets down 
multiple links: a sort of 'network raid 1' where (3) is network raid 0. 
In other words, something that transparently splits a stream into multiple 
duplicate streams down separate links, which are then merged/multiplexed 
at the other end, and duplicates discarded. Effectively trading bandwidth 
for latency, given multiple links.
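To make the idea concrete, here's a toy sketch of the receive-side logic
I have in mind - again just illustrative, with lists of (seq, payload)
records standing in for the links, and a drop simulated by deleting a
record:

```python
# Sketch of the 'network raid 1' idea: every packet is duplicated down
# each link; the receiver keeps whichever copy of a given sequence
# number it sees first and drops the rest, so latency tracks the
# fastest link and loss on one link is masked by its twin.
from itertools import zip_longest

def duplicate(packets, num_links):
    """Send an identical copy of every packet down each link."""
    return [list(packets) for _ in range(num_links)]

def merge(links):
    """Interleave arrivals, keep the first copy of each seq, drop dupes."""
    seen, out = set(), []
    # stepping across the links column by column approximates arrivals
    for column in zip_longest(*links):
        for rec in column:
            if rec is not None and rec[0] not in seen:
                seen.add(rec[0])
                out.append(rec)
    out.sort(key=lambda rec: rec[0])
    return [payload for _, payload in out]

packets = [(0, b"a"), (1, b"b"), (2, b"c")]
link_a, link_b = duplicate(packets, 2)
del link_a[1]  # simulate packet 1 being lost on link A
assert merge([link_a, link_b]) == [b"a", b"b", b"c"]
```

The interesting engineering is obviously in doing this transparently at
the IP or tunnel level rather than in the application, but the dedup
rule itself is just "first copy of each sequence number wins".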

Anyone heard of anything at all like that? Or am I crazy?


Cheers,
Gavin

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
