Scott Young ([EMAIL PROTECTED]) wrote:

> Another semi-related problem is that nodes sometimes get too much data
> in their sendqueues. I think I remember someone on this mailing list
> saying that their sendqueue got up to 80 MB. This is simply too large.
>
> One solution to this would be to throttle incoming data to the same
> speed as outgoing data for transfers that are part of the same request.
I don't think slowing things down *more* is the answer. I'd rather get
that data as fast as I can, because then (if my data store is less than
90% full), I'll *cache* it, and then I can send it out at my leisure.

> Back on topic, why do we default nodes with only 8 to 20 k/sec upload to
> 512 connections?

Because keeping an open connection costs very little (with NIO), and
opening a new connection is very expensive (cryptographic handshaking).
512 open connections isn't a problem; 512 open connections all sending
data *is* a problem. I'd rather have 512 open connections, but only
allow 10 or 20 of them to be sending large chunks of data at a time.

> Then again, NGR should fix this, since nodes with too many open
> connections send stuff slower. Therefore, I think your solution is not
> really needed.

We'll see.

-- 
Greg Wooledge                  | "Truth belongs to everybody."
[EMAIL PROTECTED]              |   - The Red Hot Chili Peppers
http://wooledge.org/~greg/     |
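[Editor's note: the "many connections open, few sending at once" idea above
could be sketched with a counting semaphore as below. The class and method
names (SendLimiter, tryStartSend, finishSend) are illustrative only, not
Freenet's actual code.]

```java
import java.util.concurrent.Semaphore;

// Sketch: keep all 512 connections open, but cap how many may be
// transmitting a large chunk at any one time.
public class SendLimiter {
    private final Semaphore slots;

    public SendLimiter(int maxConcurrentSenders) {
        // One permit per connection allowed to send at a time.
        this.slots = new Semaphore(maxConcurrentSenders);
    }

    // Non-blocking: returns true if this connection may start sending now,
    // false if it should stay connected but idle until a slot frees up.
    public boolean tryStartSend() {
        return slots.tryAcquire();
    }

    // Called when a large-chunk transfer completes, freeing a slot.
    public void finishSend() {
        slots.release();
    }

    public static void main(String[] args) {
        SendLimiter limiter = new SendLimiter(20); // e.g. 20 of 512 may send
        int granted = 0;
        for (int i = 0; i < 512; i++) {
            if (limiter.tryStartSend()) {
                granted++;
            }
        }
        System.out.println(granted); // only 20 sends admitted
    }
}
```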
