> Hello - I wonder if you could submit this message to the dev list for
> discussion?  Given my current circumstances, anonymity is my preferred
> mode for communication.  Thanks very much.
> 
Fair enough.

> Greetings.  As a newcomer to Freenet I am excited by its possibilities
> but discouraged by its apparent difficulties.
>
As are we all.

> Here is a possibly dumb idea for a "silver bullet" that could help with
> the overloaded condition.  It may be stupid, but is it stupid enough to
> be right?
> 
I'll see what I can do to explain what I don't like about your scheme.

> In some sense, overload is a self-perpetuating state.  A node cannot
> respond to messages if all the nodes it is querying are not responding.
> So each node stops responding since there is nothing it can do.
> 
yes.  It's self-perpetuating in more ways than this, because routing
becomes inefficient when requests don't go to the right place.

> However of course there are certain messages it can deal with: those
> with HTL=0.  It would not be forwarding those anyway, so it could respond
> to those right away no matter how badly overloaded the other nodes are.
> 
(HTL=0 is an error.  For the rest of my reply, I'm translating 0->1 and 1->2)

At the moment, nodes (that aren't too overloaded) check to see if they
would automatically refuse a request because of the failure table or
if they'd respond with the data before rejecting the connection.  So
there is a little of this going on right now.  And *really* overloaded
nodes reject the incoming connection, so there's not even a chance to
check if the request is HTL=1.  
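As a sketch of that ordering (the function name and signature are mine,
not anything from the actual code):

```python
def handle_incoming(key, failure_table, datastore, overloaded, forward):
    """Answer the cheap cases before rejecting for overload:
    requests we'd refuse anyway (failure table) and requests we
    can serve straight from the local datastore."""
    if key in failure_table:
        return "refused"        # would auto-refuse regardless of load
    if key in datastore:
        return datastore[key]   # serve locally, no forwarding needed
    if overloaded:
        return "rejected"       # only now does overload matter
    return forward(key)         # normal path: route onward
```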

Also, your assumption that HTL=1 requests don't get forwarded is
incorrect.  To help against datastore probing via HTL=1 requests,
there's about a 22% chance that the HTL won't be decremented and the
request will live another step.  This doesn't completely destroy the
idea, it's just a thorn in its side.
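To make the decrement rule concrete (the ~22% figure is the one
mentioned above; the constant and function names are my own):

```python
import random

# Chance that an HTL=1 request keeps its HTL and lives another hop,
# to blunt datastore probing (roughly 22%, as described above).
KEEP_ALIVE_PROB = 0.22

def decrement_htl(htl):
    """Decrement hops-to-live, except that HTL=1 requests are
    sometimes forwarded with their HTL left undecremented."""
    if htl == 1 and random.random() < KEEP_ALIVE_PROB:
        return htl
    return htl - 1
```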

> So we could give HTL=0 messages priority so that they are handled first.
> This would help slightly.
> 
> Then there are HTL=1 messages.  These will typically require forwarding.
> And if we have made the above change, then when we decrement the HTL
> to 0 and forward them, the other nodes will respond even if they are
> overloaded.  So we can handle HTL=1 messages reasonably well, and they
> should have a relatively high priority.
> 
This all seems reasonable.

> You can see where I am going with this.  Messages should be handled
> in priority inversely to HTL.  HTL=0 messages should always be handled
> immediately.  HTL=1 should be handled with high priority.  HTL=2 with
> somewhat less, and so on.
>
There's not much of a priority system in the current code.  I'll
address this later, though.
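For what it's worth, the proposed ordering is easy to state as a queue
keyed on HTL (a sketch of the idea only, not anything in the current
code):

```python
import heapq

class HtlPriorityQueue:
    """Serve pending requests lowest-HTL-first, so nearly-expired
    requests (cheap to answer) never wait behind deep ones."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: FIFO among equal HTLs

    def push(self, htl, request):
        heapq.heappush(self._heap, (htl, self._seq, request))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]
```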

> This would have the beneficial side effect of penalizing people who send
> in messages with too high HTL, which hurt the network by making it into
> a flat, unspecialized broadcast network.
> 
It's not quite that.  High-HTL requests are necessary for the network.
In simulations done by Oskar, he found that an average HTL of less
than 10 was needed for requests to succeed.  But he also found a
really high standard deviation in the HTL needed for a request to
succeed.  It's my theory that the high-HTL requests are needed to
spread data around so that it's found closer to where nodes are
looking for it.

> The problem with the idea is that it's not clear how to put a priority
> system on top of what is essentially a first come, first served model.
> However it seems that most nodes are not operating in that state now,
> but are rejecting almost all messages.  If we are in the state where we
> are rejecting messages because of "overload", let us never reject HTL=0
> and reduce the chances of rejecting low HTL messages proportionately to
> the HTL value.
> 
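(Concretely, I read the proposal as something like this sketch; the
linear scaling and the maximum HTL are my own guesses, not part of the
proposal:)

```python
import random

MAX_HTL = 20  # assumed cap for illustration; not a real constant

def should_reject(htl, overloaded):
    """Overload rejection scaled by HTL: never reject HTL=1, and
    reject deeper requests with probability growing linearly in
    the HTL."""
    if not overloaded or htl <= 1:
        return False
    return random.random() < (htl - 1) / (MAX_HTL - 1)
```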
This is what I think is the biggest problem with the idea, and the
idea of rejecting messages based on any sort of criteria about which
ones the node thinks it'll be able to serve better.  If you randomly
reject messages that don't meet your standards, you're either going to
be below capacity or queueing requests.

If you're rejecting messages you have the resources to handle, you're
not behaving efficiently, and we need as much efficiency as possible.
If you're queueing requests, you're introducing latency which just
makes requests stay in the network longer, taking up resources on all
the hops that request has gone through, and making the network less
responsive.

> This could help drain the network of backed-up messages and deadlocked
> loops and could free things up to a significant degree.  Maybe it's
> worth a try?
> 
Honestly, I think that moving to NIO should be our first priority.
Once we've done that, I'm going to push for a complete rethink of the
connection management; it seems nice to open and close connections
whenever you want, and it's great that the receiving end is able to
handle messages coming through any connection, but that style of
connection management can't do a simple "up/down" status for
neighboring nodes.  If nodes are only checked for being up when a
request comes in for them, valuable routing time is wasted trying to
open a connection to them.  Also, when people first start up their
node, it can take many requests before the node even attempts to
contact all other nodes it's been seeded with, leaving people in the
dark as to whether their node is going to become part of the network
or not.

Yes, there's a practical loss when you only forward to nodes you have
open connections to.  But with 30-40 open connections, that loss is
going to be minimal, and the benefits in efficiency will be amazing.
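i.e., routing would simply draw from the set of live connections
(trivial sketch; names and the state representation are mine):

```python
def route_candidates(neighbors, connection_state):
    """With persistent connections, routing can restrict itself to
    neighbors already known to be up, rather than discovering that
    a node is down only when a request tries to open a connection."""
    return [n for n in neighbors if connection_state.get(n) == "open"]
```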

Thelema
-- 
E-mail: thelema314 at bigfoot.com                        Raabu and Piisu
GPG 1024D/36352AAB fpr:756D F615 B4F3 BFFC 02C7  84B7 D8D7 6ECE 3635 2AAB

_______________________________________________
devl mailing list
devl at freenetproject.org
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl


