On Thu, 2003-08-07 at 21:27, Ian Clarke wrote:
> On Thu, Aug 07, 2003 at 09:21:43PM -0400, Scott Young wrote:
> > The problem with that is that it's hard to tell when the upstream
> > connection is maxed out.
> 
> Well, yes if you mean the actual upstream connection, but I mean the 
> permitted upstream bandwidth allocated to Freenet.

Well, what if a node sets it to unlimited?  What if the node is
configured with an appropriate value, but then starts up KaZaA and KaZaA
hogs a lot of the bandwidth?  In those cases the node has less bandwidth
than it thinks it has, so the QueryRejecting won't happen, and we're
back to where we started.
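One way around that (just a sketch, all the names here are made up, not
actual Freenet code): don't trust the configured limit alone, but also
track the upstream rate we actually observe, and QueryReject when the
lower of the two divided among active transfers drops below some floor:

```python
# Hypothetical sketch: estimate effective upstream bandwidth from
# observed send rates (an exponentially weighted moving average) and
# QueryReject when per-transfer throughput falls below a floor.

class BandwidthEstimator:
    """Tracks observed upstream throughput with an EWMA."""

    def __init__(self, configured_limit_kps, alpha=0.2):
        self.configured_limit_kps = configured_limit_kps
        self.alpha = alpha          # smoothing factor for the EWMA
        self.observed_kps = None    # measured aggregate send rate

    def record_sample(self, bytes_sent, seconds):
        rate = (bytes_sent / 1024.0) / seconds
        if self.observed_kps is None:
            self.observed_kps = rate
        else:
            self.observed_kps = (self.alpha * rate
                                 + (1 - self.alpha) * self.observed_kps)

    def should_query_reject(self, active_transfers, floor_kps=1.0):
        # Effective limit is the lower of what we're told and what we
        # actually see when something else (e.g. KaZaA) eats the link.
        effective = self.configured_limit_kps
        if self.observed_kps is not None:
            effective = min(effective, self.observed_kps)
        per_transfer = effective / max(active_transfers, 1)
        return per_transfer < floor_kps
```

This sidesteps the "set it to unlimited" problem too, since the observed
rate caps the effective limit no matter what the config says.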

> > Shouldn't NGR solve this?  NGR would be like going to the
> > fastest server you know of.  If that server all of a sudden is slower,
> > try another server next time.  Nobody wants to frequently go back to a
> > restaurant with poor service.
> 
> Yes, but NGR needs to learn who is the fastest server, if the servers 
> are constantly getting and finishing with customers then this will 
> fluctuate too quickly for NGR to track.

I guess you could be right... if the entire network is doing NGR then
there might still be a general tendency to open too many connections.
In that case, many of the nodes running NGR might still end up in the
same situation we're in now, maybe just a little better.  So
NGR != Global Optimum.
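A toy illustration of Ian's point about fluctuation (made-up code, not
real NGR): if a node's per-peer speed estimate adapts slowly while the
peer's true speed flips on every request, the estimate just settles
somewhere in the middle and predicts nothing useful about the next
request:

```python
# Toy illustration (not real NGR code): a slowly-adapting running
# estimate can't track a node whose real speed flips every request.

def update_estimate(estimate, sample, alpha=0.1):
    """Slowly-adapting running estimate of a node's speed (k/sec)."""
    return alpha * sample + (1 - alpha) * estimate

# Node A's true speed alternates: busy (0.5 k/s) and idle (20 k/s).
estimate = 20.0
for sample in [0.5, 20.0] * 10:   # speed flips on every request
    estimate = update_estimate(estimate, sample)

# The estimate ends up between the two true speeds, so it tells us
# nothing about whether node A will be fast or slow *next* time.
print(round(estimate, 1))
```

Making alpha bigger just makes the estimate chase noise instead, which
is the "fluctuate too quickly for NGR to track" problem in code form.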



So what precisely is it that we're aiming for?  Rank these in order of
best to worst:

1. 200 connections with 0.1 k/sec each
2. 10 connections with 2 k/sec each
3. 4 connections with 5 k/sec each
4. 1 connection with 20 k/sec

If we assume the exact number of connections listed above can remain
open at any given time (i.e. as soon as one request finishes, another
one starts at the same speed), then the fourth one is probably the
best.  It would handle the same QPH, but would transfer each file much
faster.  That said, 4 won't work well in practice because the
assumption is false.  And also, what is the cost to overall network
latency when nodes are QueryRejecting most requests just to hold the
number of connections down to the minimum?
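To put numbers on the four options above: every one moves the same
aggregate 20 k/sec (hence the same QPH under the stated assumption),
but the time to move any one file differs by a couple of orders of
magnitude.  Using a hypothetical 1 MB transfer:

```python
# Quick arithmetic for the four options: same aggregate throughput,
# wildly different latency for any single transfer.

options = [
    (200, 0.1),   # (connections, k/sec each)
    (10,  2.0),
    (4,   5.0),
    (1,   20.0),
]

file_size_kb = 1024  # a hypothetical 1 MB file

for conns, kps in options:
    aggregate = conns * kps                  # total k/sec (20 for all)
    seconds_per_file = file_size_kb / kps    # latency of one transfer
    print(f"{conns:3d} x {kps:4.1f} k/s -> aggregate {aggregate:.0f} k/s, "
          f"one 1 MB file in {seconds_per_file:7.1f} s")
```

Option 1 takes nearly three hours per file; option 4 takes under a
minute.  Same throughput, very different user experience.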

So what I'm saying is that going too far toward either end is bad for
the network as a whole.  What should happen is some type of balancing
mechanism that automatically steers toward the sweet spot between many
slow connections and fewer fast connections.
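One possible shape for such a mechanism (entirely hypothetical, just to
make the idea concrete): raise the connection limit while per-connection
throughput stays healthy, and cut it sharply when throughput drops,
along the lines of TCP's additive-increase/multiplicative-decrease:

```python
# Hypothetical balancing sketch (not Freenet code): adjust the max
# connection count with additive increase / multiplicative decrease,
# steering toward the spot between many slow and few fast connections.

def adjust_limit(limit, per_conn_kps, target_kps=2.0,
                 min_limit=1, max_limit=200):
    """Return a new connection limit based on observed per-connection speed."""
    if per_conn_kps >= target_kps:
        limit += 1                           # healthy: room for one more
    else:
        limit = max(min_limit, limit // 2)   # starved: back off hard
    return min(limit, max_limit)

# Example: start generous, then connections get slow and we back off.
limit = 40
limit = adjust_limit(limit, per_conn_kps=3.0)   # healthy, creep up
limit = adjust_limit(limit, per_conn_kps=0.4)   # starved, halve
print(limit)
```

The nice property is that nobody has to know the "right" number of
connections in advance; the limit oscillates around whatever the
current link conditions will actually support.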




_______________________________________________
devl mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
