Oskar Sandberg <oskar at freenetproject.org> writes:

> On Thu, Dec 13, 2001 at 06:53:51PM +0100, Sebastian Späth wrote:



> > The solution might be to
> > a) abolish restarted query messages altogether (thelema) or/and
> 
> You obviously can't have every node in the chain restart the request at 
> more or less the same time.
> 
> > b) time out the request "timeOut" seconds anyway, whether there are 
> > pending restarted queries or not...
> 
> You obviously can't have every node in the chain restart the request at 
> more or less the same time.
> 
No, you just have the query die and that's it.  Maybe just change
"queryRestarted" to "Gave up waiting" or something along those lines.

> > Does this make sense, or did I simply work too much today?
> 
> No it doesn't. I was obviously aware of this when we first implemented
> the restart. It is not considered a problem because:
> 
> a) It cannot go on "forever" as the HTL is decremented every time the
> timer is restarted, and will eventually reach zero causing a Timeout.
> 
This is not true, as confirmed by scipient's examination of the code
as well as my own experience of having an HTL=10 request return 11
Restarted messages over FCP (which correspond directly to
"queryRestarted", right?) and then fail after _30 minutes_ of
waiting.
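
If the HTL really were charged for every restart, as claimed in a)
above, something like the following would have to sit on the restart
path (again just an illustrative sketch with invented names), and an
HTL=10 request could never produce more than 10 Restarted messages
before a hard Timeout -- which is not what I observed.

class RestartBudget {
    private int htl;                      // hops-to-live remaining

    RestartBudget(int initialHtl) {
        this.htl = initialHtl;
    }

    // Returns true if the request may be restarted once more.
    boolean allowRestart() {
        if (htl <= 0) {
            return false;                 // budget spent: hard Timeout now
        }
        htl--;                            // each restart costs one hop
        return true;
    }
}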

> b) Freenet's structure handles this form of attack well because Nodes
> that don't respond correctly eventually lose references.
> 
The attack would be to have a node that handles _lots_ of connections,
sends "restarted" messages for about half an hour on each request, and
then finally answers the query.  The victim nodes would not penalize
this at all.
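
For the reference-loss argument in b) to apply, the victim would have
to charge the stalling against the peer somehow -- for instance
something like this sketch (invented names and threshold, not a
patch), which treats a peer that has kept requests alive via restarts
for too long the same as one that failed outright. Nothing like that
happens today, as far as I can tell.

import java.util.HashMap;
import java.util.Map;

class PeerRestartAccounting {
    // Invented threshold: total stall time after which a peer is
    // treated as having failed the requests it dragged out.
    private static final long MAX_STALL_MILLIS = 5 * 60 * 1000L;

    private final Map<String, Long> stallMillis = new HashMap<String, Long>();

    // Record time a peer kept a request open purely via restarts.
    void chargeStall(String peerId, long millis) {
        Long sofar = stallMillis.get(peerId);
        stallMillis.put(peerId, (sofar == null ? 0L : sofar) + millis);
    }

    // True once a peer has stalled us long enough to lose the reference.
    boolean shouldDemote(String peerId) {
        Long sofar = stallMillis.get(peerId);
        return sofar != null && sofar >= MAX_STALL_MILLIS;
    }
}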

> The only way to do this better would be to add limited branching to try
> to ensure some redundancy - something that is on that long list of
> things that should be tried in some later version.
> 
Limited branching can be good.

Thelema

_______________________________________________
Devl mailing list
Devl at freenetproject.org
http://lists.freenetproject.org/mailman/listinfo/devl
