On Tuesday 23 September 2003 07:03 pm, Todd Walton wrote:
> On Tue, 23 Sep 2003, Tom Kaitchuck wrote:
> > On Tuesday 23 September 2003 12:22 am, Todd Walton wrote:
> > > > If there are other nodes on the network that you trust, you could
> > > > exchange this information with them, and have a much better idea of
> > > > their specialization. Also, this could provide a basis for a
> > > > non-alchemistic gauge for what data should be put in the store and
> > > > what should be removed.
> > >
> > > You switched horses.  Some Guy was talking about routing times, and
> > > you're talking about specialization.  If you're talking about swapping
> > > specialization info with other nodes, then why not just look in the
> > > local datastore at what keys there are?  You wouldn't need to do what
> > > Some Guy was talking about.
> >
> > NGRouting provides the mechanism that determines where data requests
> > go, so doesn't it make sense to use that same mechanism to determine
> > where data is stored? Yes, strictly speaking it is not necessary, as
> > you can already compute relative specialization and popularity of data.
> > But ultimately you are trying to approximate what you will see requests
> > for in the future, and the time estimates provide that.
>
> It still doesn't work.
>
> The data you collect on yourself, even if you collect it using standard
> NGRouting methods, is *not* the same data that other nodes will collect on
> you.  Those other nodes are at a different place in the network than you
> are.  They have to deal with the speed of the link between them and you,
> and you can't necessarily know what that speed is.(*)  Also, even if a
> given piece of data takes forever for you to find, that may be fine with
> the requesting node.  You may be their last hope.  Really, the only one
> who can determine who best to route to is yourself.  Nobody else is in the
> same spot as you, in the network.

So what? If you give another node information about your routing times, it 
knows its own times through you are not going to be BETTER than that, so your 
figures provide a base estimate, and it learns your specialization 
automatically.
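A minimal sketch of that base-estimate idea (all names are hypothetical; this is not Freenet's actual API): from the requester's vantage point, the peer's self-reported time can only be a floor, to which the link between the two nodes adds overhead.

```python
# Hypothetical sketch: a peer's self-reported routing time for some key
# region acts as a lower bound; our own estimate for routing through
# that peer adds the observed link overhead on top.

def estimated_time(reported_time: float, link_overhead: float) -> float:
    """The peer's own measurement cannot improve from our vantage
    point, so it serves as a base estimate."""
    return reported_time + link_overhead

# A peer reports 120 ms for keys near its specialization; our link to
# it adds ~40 ms, so we expect no better than ~160 ms through it.
print(estimated_time(120.0, 40.0))
```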

> (*) Someday, hopefully, Freenet will work with networks less predictable
> and reliable than TCP.
>
> The other facet to this problem is that even if you could compile the very
> same data as another node, and come to the very same decision about your
> routing ability as the other node did, you don't know how your performance
> compares to the other nodes that it knows about.

Why do you have to make decisions for it? If you meet a new node and give it 
that information, it has a very good idea of your specialization right away. 
That node can then assume that if its requests through you take longer than 
your reported times, the difference is the additional hop between the two of 
you. So it can form a very clear picture of what routing performance through 
you will look like. Normally this would take a few hundred requests to firmly 
establish, but this would allow it to be done in a dozen or fewer. If a node 
lied about its specialization, and is not as good as it claims, then your 
node will attribute the shortfall to its location relative to yours, conclude 
that it is universally that much worse, and in all likelihood send it almost 
no traffic.
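The bootstrap-and-correct argument above can be sketched as follows (a toy model, with invented names and a simple running average, not anything from the Freenet codebase): the node keeps one learned overhead figure per peer, derived from the gap between observed request times and the peer's self-reported estimates. A peer that overstated its specialization inflates that gap, so all of its future estimates look uniformly worse and it attracts little traffic.

```python
# Hypothetical sketch: bootstrap routing estimates for a new peer from
# its self-reported times, then correct with the observed gap.  A peer
# that lied about its specialization sees the gap (and thus every
# estimate through it) grow uniformly.

class PeerEstimate:
    def __init__(self, reported: dict[int, float]):
        self.reported = reported   # key region -> peer's claimed time (ms)
        self.overhead = 0.0        # learned hop/link overhead (ms)
        self.samples = 0

    def observe(self, region: int, actual_time: float) -> None:
        """Update the overhead with the gap between what the peer
        claimed and what we actually measured (running average)."""
        gap = actual_time - self.reported[region]
        self.samples += 1
        self.overhead += (gap - self.overhead) / self.samples

    def estimate(self, region: int) -> float:
        """Expected time through this peer: its claim plus our learned
        overhead for the extra hop."""
        return self.reported[region] + self.overhead

peer = PeerEstimate({0: 100.0, 1: 400.0})
peer.observe(0, 150.0)   # 50 ms worse than claimed
peer.observe(0, 170.0)   # 70 ms worse; average gap is now 60 ms
print(peer.estimate(1))  # 400 + 60 = 460.0 ms, with only two samples
```

Two observations already yield a usable estimate for a region never requested through this peer, which is the "dozen or fewer" point above.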

Even if you don't find that convincing, or you think it requires placing too 
much trust in another node, collecting the information is still useful for 
the datastore. It provides a much better means of deciding what to keep and 
what to drop from the store.
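One way that datastore use might look (an illustrative sketch only; the scoring function and names are invented, not a proposal from this thread): rank stored keys by how likely future requests for them are, given the node's observed specialization and recent demand, and evict the least promising first.

```python
# Hypothetical sketch: score each stored key by closeness to the node's
# specialization (small key_distance) and recent popularity, then evict
# the lowest-scoring key first.

def retention_score(key_distance: float, recent_requests: int) -> float:
    """Higher is better: keys near our specialization with recent
    demand are the ones we are most likely to be asked for again."""
    return recent_requests / (1.0 + key_distance)

store = {
    "keyA": retention_score(key_distance=0.1, recent_requests=8),
    "keyB": retention_score(key_distance=0.9, recent_requests=2),
    "keyC": retention_score(key_distance=0.3, recent_requests=5),
}
evict_first = min(store, key=store.get)
print(evict_first)   # "keyB": far from our specialization, little demand
```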
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
