On Fri, Oct 17, 2014 at 4:21 PM, Frank Horowitz <[email protected]> wrote:
> G’Day folks,
>
> Long time lurker. I’ve been using Cero for my home router for quite a while
> now, with reasonable results (modulo bloody OSX wifi stuffola).
>
> I’m running into issues doing zfs send/receive over ssh across a (mostly)
> Internet2 backbone between Cornell (where I work) and West Virginia
> University (where we have a collaborator on a DOE-sponsored project). Both
> ends are Linux machines running fq_codel configured like so:
>
>     tc qdisc
>     qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024
>       quantum 1514 target 5.0ms interval 100.0ms ecn
So, re-reading this now, I think I now grok that A) Frank is using cerowrt
at home, and B) he has it hooked up on two cool boxes connected on either
side of Internet2, and that B) is what the question was about. So my general
assumption is that his boxes are x86, and hooked up at 1GigE or so to
Internet2.

So, answers:

A) If, after a big transfer,

    tc -s qdisc show dev eth0    # on both sides

shows no drops or ECN marks, his Linux servers are not the bottleneck link,
and he should use mtr to find the real bottleneck link elsewhere, during
that transfer.

B) If, after a big transfer, drops are seen, you are (at least some of the
time) the bottleneck link. Enable ECN between both TCPs. And if you are
willing to tolerate more latency on the link, feel free to increase the
target and interval to values you are more comfortable with, but you won't
increase actual bandwidth by all that much.

Personally I suspect "A" as the problem. And as per my original msg, it
always helps to measure, and the rrul test between the two points is the
best thing we've got. (Sketches of the actual commands for all of the above
are appended at the bottom of this mail.)

> I stumbled across hpn-ssh <https://www.psc.edu/index.php/hpn-ssh> and — of
> particular interest to this group — their page on tuning TCP parameters:
>
> <http://www.psc.edu/index.php/networking/641-tcp-tune>
>
> N.B. their advice to increase buffer size…
>
> I’m curious, what part (if any) of that advice survives with fq_codel
> running on both ends?

Most of that seems to apply to TCPs, not to the qdisc. I would suspect that
enabling TCP pacing between the two points might be helpful, but without
data on whatever problem(s) you are experiencing on your path, I can't help
much. Amusingly, Matt Mathis is one of the original authors of that page,
and perhaps he has new advice.

> Any advice from the experts here would be gratefully received!
>
> (And thanks for all of your collective and individual efforts!)
>
> Cheers,
>     Frank Horowitz
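To be concrete about A): the check on each end might look like the below,
assuming eth0 is the interface facing the path (the remote hostname is made
up for illustration):

    # on both ends, after (or during) the big transfer:
    tc -s qdisc show dev eth0
    # nonzero "dropped" / "ecn_mark" counters in the fq_codel stats mean you
    # were the bottleneck at least some of the time; all zeros means look
    # elsewhere on the path, while the transfer is running:
    mtr wvu-collab.example.edu    # hypothetical remote host
    # per-hop loss and latency should finger the real bottleneck link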
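If drops do show up: the qdisc output above already has ecn on it, so the
missing half is letting the two TCPs negotiate it, plus (optionally)
relaxing the codel knobs. The 20ms/200ms values below are purely
illustrative, not a recommendation:

    # on both hosts, so the TCPs can negotiate ECN end to end:
    sysctl -w net.ipv4.tcp_ecn=1
    # optionally trade some latency for a little throughput; pick your own numbers:
    tc qdisc change dev eth0 root fq_codel target 20ms interval 200ms ecn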
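For the measurement, the rrul test lives in netperf-wrapper; roughly
(assuming netperf-wrapper is installed on one end and a netserver is
reachable on the other):

    netperf-wrapper -H <the other host> rrul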
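On the buffer advice specifically: since it tunes the TCP endpoints rather
than the queue, it survives fq_codel just fine; on a long path like Cornell
to WVU the socket buffers still have to cover the bandwidth-delay product.
Something along these lines (values are just an example):

    # min / default / max socket buffer sizes, in bytes:
    sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"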
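And for pacing: one way to experiment on a recent-enough kernel (3.12 or
later) is the sch_fq qdisc, which paces locally generated TCP. Note that it
would replace fq_codel on the server's egress, so treat it as a
try-it-and-measure option:

    tc qdisc replace dev eth0 root fq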
--
Dave Täht
http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks

_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel