Another possibility: do we want to limit this to keys that have had a
certain number of failed requests, for example two?

This is very interesting, mainly because it not only gives us better
stats, *it gives us more accurate pDNF estimators too!*
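
A minimal sketch of that thresholded variant, assuming the secondary
failure table (SFT) proposed in the quoted message below. The class and
method names here are hypothetical, not Fred's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a key's repeat DNFs only stop counting against
// the pDNF estimators once it has already failed `threshold` times.
public class ThresholdedFailureTable {
    private final Map<String, Integer> failures = new HashMap<>();
    private final int threshold;

    public ThresholdedFailureTable(int threshold) {
        this.threshold = threshold;
    }

    // Returns true while the DNF should still affect pDNF estimators.
    public boolean onDataNotFound(String key) {
        int n = failures.merge(key, 1, Integer::sum); // increment count
        return n <= threshold;
    }

    // Key proven to exist: reset it, so later DNFs count again.
    public void onSuccess(String key) {
        failures.remove(key);
    }
}
```

With a threshold of 2, the first two DNFs for a key would still update
the estimators; only from the third onwards would they be ignored.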

On Tue, Oct 28, 2003 at 02:01:26AM +0000, Toad wrote:
> Possible reasons not to do it:
> 1. Can it be exploited by hostile nodes?
> I don't see how, but does anyone have any ideas?
> 
> 2. Do we want Frost traffic to be routing-neutral?
> Frost traffic is a large fraction of total Freenet traffic. If this
> entire fraction does not affect the estimators - or does not affect pDNF
> - then it is load without function: the pDNFs will rely on the
> "straight" traffic, which is a much smaller proportion. Are any of these
> drawbacks worse than not doing it?
> 
> 3. Do we want successful requests that were in the SFT to impact the 
> estimators?
> Probably yes...
> 
> 
> I'm broadly supportive, but I think it needs to be considered rather
> carefully...
> 
> A long term solution is to implement passive requests, but I'm not sure
> we have the security implications fully thought out, and I'm not sure we
> want to implement radical new features like that at this point.
> 
> On Tue, Oct 28, 2003 at 01:45:36AM +0000, Toad wrote:
> > On Mon, Oct 27, 2003 at 10:58:37AM +0200, Jusa Saari wrote:
> > > Found this on the Frost boards, and decided to post it, since the problem
> > > was discussed on this board some time ago. Removed the attached original
> > > article, since it was already posted here.
> > > 
> > > And yes, there is still a problem in unstable, because
> > > 1) My node hasn't shown any kind of specialization
> > > 2) The probability of success of incoming requests is very low (0.02 max)
> > > 3) The pDNF for all the nodes in my routing table is very high
> > > 
> > > *****
> > > 
> > > I recently posted an article about Frost's effect on routing (attached to
> > > the bottom of this message for your convenience), in which I suggested the
> > > recent problems with Freenet were caused by Frost's tendency to request
> > > nonexistent keys (see bottom of message for details). I also offered a
> > > number of solutions, all of which would either have serious drawbacks or
> > > be quite difficult to implement.
> > > 
> > > After thinking about it a little, I've come up with another possible
> > > solution, and can't find any serious drawbacks in it at a quick glance.
> > > Please review and comment (forward to dev mailing list if possible, since
> > > my previous post also got there).
> > > 
> > > 
> > > I suggest adding a secondary failure table to Fred. This SFT is very
> > > large (as large as possible), and the keys in it don't have a fixed time
> > > to live (it is discarded on exit, though, since there's not much point
> > > keeping it). The node uses it as follows:
> > > 
> > > 1) If a request results in a DNF, check whether the key is already in the
> > > SFT. If not, add the key (if the table is full, discard a random key from
> > > it) and behave as usual. If the key is in the SFT, don't change the pDNF
> > > for the node returning it (consider it a non-legitimate DNF).
> > > 
> > > 2) If a request results in data being found, check if the key is in the
> > > SFT. If so, remove it. Then proceed as normal.
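
A minimal sketch of the two steps above in Java; the class and method
names are hypothetical, not Fred's actual API:

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Random;
import java.util.Set;

// Hypothetical sketch of the secondary failure table (SFT) proposal.
public class SecondaryFailureTable {
    private final Set<String> keys = new HashSet<>();
    private final int capacity;
    private final Random random = new Random();

    public SecondaryFailureTable(int capacity) {
        this.capacity = capacity;
    }

    // Step 1: called on a DNF. Returns true if the DNF should still
    // count against the returning node's pDNF estimator.
    public boolean onDataNotFound(String key) {
        if (keys.contains(key)) {
            return false; // repeat failure: treat as non-legitimate DNF
        }
        if (keys.size() >= capacity) {
            evictRandom(); // table full: discard a random key
        }
        keys.add(key);
        return true; // first failure behaves as usual
    }

    // Step 2: called on success. The key is proven to exist, so any
    // later DNF for it is legitimate again.
    public void onSuccess(String key) {
        keys.remove(key);
    }

    private void evictRandom() {
        int target = random.nextInt(keys.size());
        Iterator<String> it = keys.iterator();
        for (int i = 0; i < target; i++) it.next();
        it.next();
        it.remove();
    }
}
```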
> > > 
> > > 
> > > When Frost starts requesting the same key again and again (in the hope
> > > that someone has inserted a message under it), only the first failure is
> > > taken into account in routing decisions. Any subsequent requests are
> > > still routed, but they can no longer corrupt the routing information.
> > > Then, when someone actually inserts under the key and one of these
> > > requests succeeds, the key is proven to exist, and thus gets taken out of
> > > the SFT, because any further DNFs for it are obviously legitimate.
> > > 
> > > 
> > > This simple algorithm can be improved in many ways; the two most obvious
> > > would be to count how often each key in the SFT is requested and, in the
> > > "table full" scenario, remove the key with the lowest count instead of a
> > > random one; and to maintain a "whitelist" of keys we have actually seen,
> > > which are thus proven to exist, so any failure to get them should always
> > > be considered a legitimate DNF (and they shouldn't be added to the SFT,
> > > of course).
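
A minimal sketch of those two improvements (count-based eviction and a
whitelist of known keys); again, all names are hypothetical:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: SFT variant that evicts the least-requested key
// when full and whitelists keys that have been seen to exist.
public class ImprovedFailureTable {
    private final Map<String, Integer> requestCounts = new HashMap<>();
    private final Set<String> whitelist = new HashSet<>();
    private final int capacity;

    public ImprovedFailureTable(int capacity) {
        this.capacity = capacity;
    }

    // Returns true if the DNF should still affect the estimators.
    public boolean onDataNotFound(String key) {
        if (whitelist.contains(key)) {
            return true; // key known to exist: every DNF is legitimate
        }
        Integer count = requestCounts.get(key);
        if (count != null) {
            requestCounts.put(key, count + 1); // track for eviction order
            return false;
        }
        if (requestCounts.size() >= capacity) {
            evictLeastRequested(); // drop the least-requested key, not a random one
        }
        requestCounts.put(key, 1);
        return true;
    }

    public void onSuccess(String key) {
        requestCounts.remove(key);
        whitelist.add(key); // proven to exist: never enters the SFT again
    }

    private void evictLeastRequested() {
        String victim = null;
        int min = Integer.MAX_VALUE;
        for (Map.Entry<String, Integer> e : requestCounts.entrySet()) {
            if (e.getValue() < min) {
                min = e.getValue();
                victim = e.getKey();
            }
        }
        requestCounts.remove(victim);
    }
}
```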
> > > 
> > > Comments?
> > 
> > Ok, so the proposal:
> > 
> > Keep the current failure table. It should probably be made very large.
> > 
> > Create a large secondary failure table. Keys in this table will still be
> > routed, but they are not counted for statistical purposes, nor do they
> > affect the estimators. This makes psuccess more accurate, but it means a
> > large fraction of traffic is simply not counted in psuccess at all: we
> > will be making "disposable" requests which don't affect the estimators.
> > 
> > 
> > Hrrm. This is very interesting. Anyone see an obvious reason not to do
> > it?

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.

