On Sun, Aug 03, 2003 at 12:23:18PM -0400, Scott Young wrote:
> On Sat, 2003-08-02 at 21:03, Zlatin Balevsky wrote:
> > Currently we are seeing very high times after the route is found until
> > the message leaves the node. They are in the order of 30 seconds and in
> > some cases more than 2 minutes. We can chase the reasons everywhere,
> > but the basic truth is that since we RoundRobin to all sending
> > connections, the more sending connections we have the slower each one of
> > them will be.
> >
> > It is very common for average to busy nodes to have more than 100
> > sending connections, and values of up to 888 out of 1200 have been
> > reported. The latter case was on a T1 with 1MBps uplink, and each one
> > of those connections would get 1kb/sec uplink speed. Unfortunately,
> > very few people have T1, and the most common uplink cap will be between
> > 8 and 20 kb. I propose (and am willing to implement) that we set limits
> > on the number of connections that can send at the same time. The scheme
> > would be very simple and will guarantee that we don't get these
> > ridiculous lags:
>
> Another semi-related problem is that nodes sometimes get too much data
> in their sendqueues. I think I remember someone on this mailing list
> saying that their sendqueue got up to 80 MB. This is simply too large.
Nobody has yet explained to me why a sendqueue of 80MB is a problem.
This is the total amount of data to send, including all the files in
the datastore. It is not expected to be sent immediately - we probably
don't HAVE all the 80MB.

> One solution to this would be to throttle incoming data to the same
> speed as outgoing data for transfers that are part of the same request.
> That way other (high transfer rate) requests can be handled more
> quickly. Although, this requires connection multiplexing to work well
> because a transfer between two nodes with a high-speed connection would
> be throttled down by slower nodes in the request path.

Yuck. So we slow the request down even more. Also, it would mean
multiplexing would have to do full blown flow control, which we can
otherwise avoid by imposing a maximum file size.

> Back on topic, why do we default nodes with only 8 to 20 k/sec upload to
> 512 connections? 512 seems ridiculously large, even for nodes with 200+
> k/sec max upload rates. I have a cable modem with about 12kb/sec upload,
> and my upstream is filled with a max of 30 connections. It does,
> however, take a little longer for the node to get up to full speed. I
> should probably try fewer connections and see how my node does.

Because most of these connections will either be idle or be transferring
at a very low rate, since they are limited by the slowest node on the
chain. And because it is REALLY expensive to open a new connection.

> Then again, NGR should fix this, since nodes with too many open
> connections send stuff slower. Therefore, I think your solution is not
> really needed.

No, they don't. Nodes with too many transferring connections send stuff
slower.

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
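
The limit Zlatin proposes could be as simple as a counting semaphore
sized from the uplink cap. The sketch below is purely illustrative:
the class, the method names and the 2 kB/s per-sender floor are
invented for this example and are not taken from the actual node code.

    import java.util.concurrent.Semaphore;

    /**
     * Illustrative sketch: cap how many connections may transmit at once,
     * so each active sender gets a usable share of the uplink instead of
     * the round-robin spreading it across hundreds of mostly idle links.
     */
    public class SendLimiter {
        private final Semaphore slots;

        public SendLimiter(int uplinkBytesPerSec, int minBytesPerSecPerSender) {
            // e.g. 20 kB/s uplink with a 2 kB/s floor => at most 10 concurrent senders
            int max = Math.max(1, uplinkBytesPerSec / minBytesPerSecPerSender);
            this.slots = new Semaphore(max, true); // fair: waiting senders go FIFO
        }

        /** Blocks until a send slot is free; call before starting to send. */
        public void acquireSlot() throws InterruptedException {
            slots.acquire();
        }

        /** Call (e.g. in a finally block) once the message/trailer is written. */
        public void releaseSlot() {
            slots.release();
        }
    }

A sender thread would call acquireSlot() just before it starts writing
and releaseSlot() when the transfer finishes or fails; connections that
are merely open but idle never hold a slot, so the cap only constrains
connections that are actually transferring.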
