On 31/03/14 19:50, Arne Babenhauserheide wrote:
> On Sunday, 30 March 2014 at 20:41:41, Matthew Toseland wrote:
>> If we ensure that only nodes
>> with a proven track record of performance (or at least bandwidth) route
>> high HTL requests or participate in tunnels, we can slow down MAST
>> significantly. (Inspired by the "don't route high HTL requests to
>> newbies" anti-fast-MAST proposal).
> If that’s the only requirement, then the fix is trivial: each node records, 
> for each of its connections, whether it fulfils the requirements for 
> high-HTL opennet routing.
Great minds think alike. ;) I posted a much more complex proposal, but
there may be a quick fix (which provides some limited benefit)...
> For example it could route high-HTL requests only to nodes which have at 
> least 1/4 of its own uptime*average bandwidth, or which are among the 1/4 
> of its peers with the highest uptime*average bandwidth (choosing the best 
> match from that subset).
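Something like this, I take it (a hypothetical sketch in Java; the Peer
interface and method names are stand-ins, not the actual fred API):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of the rule above, not the real fred API:
    // a peer is eligible for high-HTL routing if its uptime * bandwidth
    // score is at least 1/4 of our own node's score, or if it is in the
    // top quarter of our peers by that score.
    class UptimeBandwidthFilter {
        interface Peer {
            double uptimeSeconds();
            double successfulBytesPerSecond(); // only successfully returned data
        }

        static double score(Peer p) {
            return p.uptimeSeconds() * p.successfulBytesPerSecond();
        }

        static List<Peer> eligible(List<Peer> peers, double ourScore) {
            List<Peer> sorted = new ArrayList<>(peers);
            sorted.sort(Comparator.comparingDouble(UptimeBandwidthFilter::score)
                    .reversed());
            int topQuarter = Math.max(1, peers.size() / 4);
            List<Peer> out = new ArrayList<>();
            for (int i = 0; i < sorted.size(); i++) {
                Peer p = sorted.get(i);
                if (i < topQuarter || score(p) >= ourScore / 4)
                    out.add(p);
            }
            return out;
        }
    }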
Or even simpler, have a minimum connection uptime. If a node has stayed
in your connection list for a long continuous period, then it has
presumably performed reasonably well for you, or you would have dumped
it in favour of a different node. Of course there are questions about
what the minimum uptime should be... IMHO, on nodes with sufficient
uptime, we should aim for a minimum peer connected time above a specific
threshold (e.g. 3 hours), because that represents a specific, measurable
commitment of bandwidth from an attacker.
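A minimal sketch of that (again hypothetical, not the real fred code;
connectedSince() is a stand-in for however we track when the current
connection was established):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: only route high-HTL requests to peers whose
    // current continuous connection is at least MIN_CONNECTED_MS old.
    class HighHTLPeerFilter {
        interface Peer {
            long connectedSince(); // wall-clock millis, start of current connection
        }

        // e.g. 3 hours: a specific, measurable bandwidth commitment
        static final long MIN_CONNECTED_MS = 3L * 60 * 60 * 1000;

        static List<Peer> eligibleForHighHTL(List<Peer> peers, long now) {
            List<Peer> out = new ArrayList<>();
            for (Peer p : peers) {
                if (now - p.connectedSince() >= MIN_CONNECTED_MS)
                    out.add(p); // long-lived connection => some track record
            }
            return out;
        }
    }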
> For bandwidth, ideally count only successfully returned data (so a node 
> cannot appear high-bandwidth just by making many requests or returning 
> garbage).
>
> The big advantage of this is that it requires no global state at all.
>
> That would also have a few beneficial side-effects:
>
> - High uptime nodes are likely to be well-connected. So requests should be 
> less likely to be stuck in badly connected clusters.
> - For new nodes this is essentially random-routing the first steps.
Why?

Random routing the first few steps is a good idea anyway, but
statistical attacks similar to MAST (just more expensive) remain
possible (and are easier if the attacker can connect to a large
proportion of the network).
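For the record, what I mean by random routing the first few steps, as a
hypothetical sketch (the hop count and names are made up):

    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch: pick the next hop uniformly at random for the
    // first few hops, then fall back to normal greedy routing by location.
    class FirstHopsRouter {
        interface Peer {
            double location(); // Freenet location on the [0,1) circle
        }

        static final int RANDOM_HOPS = 3; // assumption: "a few" steps
        final Random random = new Random();

        Peer nextHop(List<Peer> candidates, int hopsSoFar, double target) {
            if (hopsSoFar < RANDOM_HOPS)
                return candidates.get(random.nextInt(candidates.size()));
            return chooseGreedily(candidates, target); // normal routing
        }

        static Peer chooseGreedily(List<Peer> candidates, double target) {
            Peer best = null;
            double bestDist = Double.MAX_VALUE;
            for (Peer p : candidates) {
                double d = circularDistance(p.location(), target);
                if (d < bestDist) { bestDist = d; best = p; }
            }
            return best;
        }

        static double circularDistance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d); // distance on the unit circle
        }
    }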
> - The effects of churn on the network are reduced, because the requests 
> quickly get into the well-connected cluster.
Is that an advantage or a disadvantage? I've always thought the
"aristocracy" effect was a bad thing? Maybe it isn't...
> The bad side-effect would be that attacks using long-lived, high-bandwidth 
> nodes would become easier. For those attacks, the network would effectively 
> be half as large. But those attacks are expensive, and someone who wants to 
> carry them out effectively has to provide a backbone for Freenet, which 
> increases privacy for anything that is not being attacked right now.
The big question is what effect this would have on routing (and load
management / capacity, for that matter). That's why I haven't deployed
it as a quick fix for MAST (it was first proposed way back), but maybe
we can figure it out...

You might argue that high-HTL requests are so far from the target that
it doesn't matter. But ideally we'd like "high HTL" to mean "before the
request reaches the ideal node", i.e. it stays in the core for the first
5-7 hops (while MAST is feasible) and then goes off to all the newbies.
The catch is, after that point it's scraping the bottom of the barrel...
On my node, 35% of successful requests are at HTL 14 or higher, so
possibly this isn't a serious objection, given that the capacity of the
core nodes is probably a large fraction of the total capacity anyway
(especially if we take uptime issues into account)...
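Tying the two ideas together, the gate could look something like this
(hypothetical sketch; the threshold of 14 is just the figure from my
node above, not a tuned value):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: restrict routing to established peers only
    // while HTL is still high, i.e. while MAST is feasible; below the
    // threshold, route to any peer, including newbies.
    class HTLGatedRouter {
        interface Peer {
            long connectedSince();
            double location();
        }

        static final int HIGH_HTL_THRESHOLD = 14;      // assumption
        static final long MIN_CONNECTED_MS = 3L * 60 * 60 * 1000;

        static Peer route(List<Peer> peers, int htl, double target, long now) {
            List<Peer> pool = peers;
            if (htl >= HIGH_HTL_THRESHOLD) {
                List<Peer> established = new ArrayList<>();
                for (Peer p : peers)
                    if (now - p.connectedSince() >= MIN_CONNECTED_MS)
                        established.add(p);
                if (!established.isEmpty())
                    pool = established; // otherwise don't stall the request
            }
            Peer best = null;
            double bestDist = Double.MAX_VALUE;
            for (Peer p : pool) {
                double d = Math.abs(p.location() - target);
                d = Math.min(d, 1.0 - d);
                if (d < bestDist) { bestDist = d; best = p; }
            }
            return best;
        }
    }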

Also, what are your thoughts on broadcasting to peers (efficiently and
slowly, i.e. not waiting for answers except on the final node) after we
reach the ideal node?
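Concretely I'm imagining something like the following (hypothetical
sketch; sendAsync() stands in for whatever fire-and-forget message
primitive we'd actually use):

    import java.util.List;

    // Hypothetical sketch: once the request reaches the node closest to
    // the target, fan the request out to all peers slowly and without
    // waiting for answers; only the final recipient sends a reply.
    class TerminalBroadcast {
        interface Peer {
            void sendAsync(byte[] message); // fire-and-forget, no reply awaited
        }

        static void broadcast(List<Peer> peers, byte[] message,
                long interMessageDelayMs) throws InterruptedException {
            for (Peer p : peers) {
                p.sendAsync(message);              // don't block on answers...
                Thread.sleep(interMessageDelayMs); // ...and pace the sends
            }
            // The final node that actually has (or stores) the data replies
            // through the normal request path; omitted here.
        }
    }

Obviously the devil is in the details of load management for the fanout.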
