I assume that we should also remove items from the table whenever a
request for that key succeeds.

Other than that I think that this might be a good thing to have.
/N


> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Toad
> Sent: 28 October 2003 03:49
> To: Discussion of development issues
> Subject: Re: [freenet-dev] Frost and Routing - solution ?
> 
> 
> Implementation proposal:
> 
> One table, including both old and new failure tables.
> 
> Size should be on the order of 50,000 entries - if this uses 
> significant RAM, it can probably be optimized considerably.
> 
> Each item has:
> 
> Key
> List of
>       Hops to live
>       Time
> 
> (this is so that the classic failtable is accurate, but we 
> don't duplicate the key unnecessarily; these items would be 
> deleted when they go out of date, except for the most recent 
> one, which is kept)
> 
> Number of hits
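For concreteness, one entry of the table described above might look like this. This is a Python sketch for illustration only; Fred itself is written in Java, and all names here (FailureEntry, record_failure, expire, max_age) are invented, not from the actual codebase:

```python
import time

# One failure-table entry: a key maps to a list of (hops-to-live, time)
# pairs plus a hit counter. Names are illustrative, not Fred's.
class FailureEntry:
    def __init__(self):
        self.htl_times = []  # list of (hops-to-live, timestamp) pairs
        self.hits = 0        # number of failed requests seen for this key

    def record_failure(self, htl, now=None):
        now = time.time() if now is None else now
        self.htl_times.append((htl, now))
        self.hits += 1

    def expire(self, max_age, now=None):
        # Delete out-of-date pairs, but always keep the most recent one,
        # as the proposal specifies.
        now = time.time() if now is None else now
        fresh = [(h, t) for (h, t) in self.htl_times if now - t <= max_age]
        self.htl_times = fresh or self.htl_times[-1:]
```

Keeping the most recent pair even when stale means the table can always answer "what was the HTL last time?" without storing the key twice.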
> 
> Now, when we get a request:
> 
> If the key is not in the failtable, or the HTL is higher than 
> it was last time, we handle the request completely normally.
> 
> If we are within the failtime of a valid hops-to-live:time 
> pair, DNF the request immediately, as we do now.
> 
> If the number of hits is greater than or equal to X (probably 
> 2), set a flag on the Routing object, and then route the 
> request normally. The flag will cause the Routing object not 
> to report the DataNotFound to the estimators or the main 
> statistics, but all other information (for example, failure 
> time) will be reported.
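The three-way decision above can be sketched as follows (Python for illustration only; Fred is Java, and Entry, HIT_THRESHOLD, and the failtime parameter are invented stand-ins for the structures and the "X" in the text):

```python
from dataclasses import dataclass, field

HIT_THRESHOLD = 2  # "X" in the text above

@dataclass
class Entry:
    htl_times: list = field(default_factory=list)  # (hops-to-live, time) pairs
    hits: int = 0

def handle_request(key, htl, table, now, failtime):
    """Return 'NORMAL', 'DNF', or 'ROUTE_NO_STATS' per the rules above."""
    entry = table.get(key)
    # Not in the failtable, or HTL higher than last time: fully normal.
    if entry is None or not entry.htl_times or htl > entry.htl_times[-1][0]:
        return 'NORMAL'
    # Within the failtime of a valid hops-to-live:time pair: DNF at once.
    if any(htl <= h and now - t < failtime for (h, t) in entry.htl_times):
        return 'DNF'
    # Enough hits: route normally, but flag the Routing object so the
    # eventual DataNotFound is not reported to the estimators.
    if entry.hits >= HIT_THRESHOLD:
        return 'ROUTE_NO_STATS'
    return 'NORMAL'
```

Note the ordering matters: the immediate-DNF check shadows the no-stats path, so repeated requests inside the failtime never reach the network at all.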
> 
> 
> On Tue, Oct 28, 2003 at 02:41:41AM +0000, Toad wrote:
> > Another possibility: do we want to limit this to files that have 
> > had a certain number of failed requests, for example 2?
> > 
> > This is very interesting, mainly because not only does it give us 
> > better stats, *it gives us more accurate pDNF estimators too!*.
> > 
> > On Tue, Oct 28, 2003 at 02:01:26AM +0000, Toad wrote:
> > > Possible reasons not to do it:
> > > 1. Can it be exploited by hostile nodes?
> > > I don't see how, but does anyone have any ideas?
> > > 
> > > 2. Do we want Frost traffic to be routing-neutral?
> > > Frost traffic is a large fraction of total Freenet traffic. If 
> > > this entire fraction does not affect the estimators - or does not 
> > > affect pDNF - then it is load without function: the pDNFs will 
> > > rely on the "straight" traffic, which is a much smaller 
> > > proportion... Are any drawbacks worse than not doing it?
> > > 
> > > 3. Do we want successful requests that were in the SFT to impact 
> > > the estimators?
> > > Probably yes...
> > > 
> > > 
> > > I'm broadly supportive, but I think it needs to be considered 
> > > rather carefully...
> > > 
> > > A long-term solution is to implement passive requests, but I'm not 
> > > sure we have the security implications fully thought out, and I'm 
> > > not sure we want to implement radical new features like that at 
> > > this point.
> > > 
> > > On Tue, Oct 28, 2003 at 01:45:36AM +0000, Toad wrote:
> > > > On Mon, Oct 27, 2003 at 10:58:37AM +0200, Jusa Saari wrote:
> > > > > Found this on the Frost boards, and decided to post it, since 
> > > > > the problem was discussed on this board some time ago. Removed 
> > > > > the attached original article, since it was already posted here.
> > > > > 
> > > > > And yes, there is still a problem in unstable, because
> > > > > 1) My node hasn't shown any kind of specialization
> > > > > 2) The probability of success of incoming requests is very low 
> > > > > (0.02 max)
> > > > > 3) The pDNF for all the nodes in my routing table is very high
> > > > > 
> > > > > *****
> > > > > 
> > > > > I recently posted an article about Frost's effect on routing 
> > > > > (attached to the bottom of this message for your convenience), 
> > > > > in which I suggested the recent problems with Freenet were 
> > > > > caused by Frost's tendency to request nonexistent keys (see 
> > > > > bottom of message for details). I also offered a number of 
> > > > > solutions, all of which would either have serious drawbacks or 
> > > > > be quite difficult to implement.
> > > > > 
> > > > > After thinking about it a little, I've come up with another 
> > > > > possible solution, and can't find any serious drawbacks in it at 
> > > > > a quick glance. Please review and comment (forward to dev 
> > > > > mailing list if possible, since my previous post also got 
> > > > > there).
> > > > > 
> > > > > 
> > > > > I suggest adding a secondary failure table (SFT) to Fred. This 
> > > > > SFT is very large (as large as possible), and the keys in it 
> > > > > don't have a fixed lifetime (it's discarded on exit, though, 
> > > > > since there's not much point keeping it). The node uses it as 
> > > > > follows:
> > > > > 
> > > > > 1) If a request results in DNF, check if the key is already in 
> > > > > the SFT. If not, add the key (if the table is full, discard a 
> > > > > random key from it) and behave as usual. If the key is in the 
> > > > > SFT, don't change the pDNF for the node returning it (consider 
> > > > > it a non-legitimate DNF).
> > > > > 
> > > > > 2) If a request results in data being found, check if the key 
> > > > > is in the SFT. If so, remove it. Then proceed as normal.
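The two rules could be sketched like this (Python for illustration only; the real implementation would live in Fred's Java request-handling code, and the function names and the capacity default are invented):

```python
import random

def on_dnf(key, sft, capacity=100_000):
    """Rule 1: on DataNotFound, decide whether the failure should count."""
    if key in sft:
        return False  # already seen: non-legitimate DNF, leave pDNF alone
    if len(sft) >= capacity:
        sft.discard(random.choice(list(sft)))  # table full: evict at random
    sft.add(key)
    return True       # first failure for this key: behave as usual

def on_success(key, sft):
    """Rule 2: data found, so the key is proven to exist."""
    sft.discard(key)  # later DNFs for it are legitimate again
```

A plain set suffices here because the basic scheme stores no per-key state beyond membership; the refinements discussed below would need a counter per key.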
> > > > > 
> > > > > 
> > > > > When Frost starts requesting the same key again and again (in 
> > > > > hopes of someone having inserted a message under it), only the 
> > > > > first failure is taken into consideration in routing decisions. 
> > > > > Any subsequent requests are still routed, but they can't mess 
> > > > > up routing information anymore. Then, when someone actually 
> > > > > inserts under the key, and one of these requests succeeds, the 
> > > > > key is proven to exist, and thus gets taken out of the SFT, 
> > > > > because any further DNFs for it are obviously legitimate.
> > > > > 
> > > > > 
> > > > > This simple algorithm can be improved in many ways; the two 
> > > > > most obvious ones would be to count the times a key in the SFT 
> > > > > is requested and remove the key with the lowest count instead 
> > > > > of a random one (in the "table full" scenario), and to maintain 
> > > > > a "whitelist" of keys we have seen (which are thus proven to 
> > > > > exist, so any failure to get them should always be considered a 
> > > > > legitimate DNF and shouldn't be added to the SFT, of course).
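Both suggested improvements are small on top of the basic scheme. A sketch, again in Python purely for illustration (function names and the dict-based counter representation are invented, not from Fred):

```python
def evict_least_requested(counts):
    """First improvement: in the "table full" case, drop the key with
    the fewest repeat requests rather than a random one."""
    victim = min(counts, key=counts.get)
    del counts[victim]
    return victim

def should_count_dnf(key, counts, whitelist):
    """Second improvement: a DNF for a whitelisted (proven-to-exist)
    key is always legitimate and never enters the SFT."""
    if key in whitelist:
        return True      # key is known to exist: count the failure
    if key in counts:
        counts[key] += 1  # repeat request for an unproven key
        return False      # non-legitimate DNF: ignore for estimators
    counts[key] = 1
    return True           # first failure: count it as usual
```

Least-requested eviction keeps the heavily polled Frost keys (the ones doing the damage) in the table longest, which is exactly the behaviour wanted.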
> > > > > 
> > > > > Comments ?
> > > > 
> > > > Ok, so the proposal:
> > > > 
> > > > Keep the current failure table. It should probably be made very 
> > > > large.
> > > > 
> > > > Create a large secondary failure table. Keys in this table will 
> > > > still be routed, but are not counted for statistical purposes, 
> > > > nor do they affect estimators, making the psuccess more accurate, 
> > > > but meaning a large fraction of traffic is simply not counted in 
> > > > the psuccess at all, and we will be making "disposable" requests, 
> > > > which don't affect the estimators.
> > > > 
> > > > 
> > > > Hrrm. This is very interesting. Anyone see an obvious reason not 
> > > > to do it?
> > 
> > --
> > Matthew J Toseland - [EMAIL PROTECTED]
> > Freenet Project Official Codemonkey - http://freenetproject.org/
> > ICTHUS - Nothing is impossible. Our Boss says so.
> 
> 
> 
> > _______________________________________________
> > Devl mailing list
> > [EMAIL PROTECTED]
> > http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
> 
> 

