On 01/04/14 19:20, Robert Hailey wrote:
> On 2014/04/01 (Apr), at 1:12 PM, Matthew Toseland wrote:
>
>> On 31/03/14 19:50, Arne Babenhauserheide wrote:
>>> On Sunday, 30 March 2014 at 20:41:41, Matthew Toseland wrote:
>>>> If we ensure that only nodes
>>>> with a proven track record of performance (or at least bandwidth) route
>>>> high HTL requests or participate in tunnels, we can slow down MAST
>>>> significantly. (Inspired by the "don't route high HTL requests to
>>>> newbies" anti-fast-MAST proposal).
>>> If that’s the only requirement, then the fix is trivial: each node 
>>> records, for each of its connections, whether that peer fulfills the 
>>> requirements for a high-HTL opennet node.
>>>
>>> For example, it could route high-HTL requests only to peers whose 
>>> uptime * average-bandwidth score is at least a quarter of its own, or 
>>> which are among the quarter of its peers with the highest score 
>>> (choosing the best location match from that subset).
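>>>
>>> Roughly, in Java (just a sketch to make the rule concrete; the class 
>>> and method names here are made up, not fred's actual ones):
>>>
>>> import java.util.List;
>>>
>>> final class PeerScore {
>>>     final Object peer;         // placeholder for a peer reference
>>>     final long uptimeSeconds;  // observed uptime of this connection
>>>     final double avgBandwidth; // bytes/sec of successfully returned
>>>                                // data only, not raw traffic
>>>
>>>     PeerScore(Object peer, long uptimeSeconds, double avgBandwidth) {
>>>         this.peer = peer;
>>>         this.uptimeSeconds = uptimeSeconds;
>>>         this.avgBandwidth = avgBandwidth;
>>>     }
>>>
>>>     double score() { return uptimeSeconds * avgBandwidth; }
>>> }
>>>
>>> final class HighHtlFilter {
>>>     // Eligible if the peer's score is at least a quarter of our own,
>>>     // or it is within the top quarter of our peers by score. Only
>>>     // local state is consulted.
>>>     static boolean eligible(PeerScore candidate, double ourScore,
>>>                             List<PeerScore> peers) {
>>>         if (candidate.score() >= ourScore / 4.0) return true;
>>>         long strictlyBetter = peers.stream()
>>>                 .filter(p -> p.score() > candidate.score())
>>>                 .count();
>>>         return strictlyBetter < Math.max(1, peers.size() / 4);
>>>     }
>>> }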
>>>
>>> As bandwidth, ideally count only successfully returned data (so a 
>>> node cannot appear high-bandwidth just by issuing many requests or 
>>> returning garbage).
>>>
>>> The big advantage of this is that it requires no global state at all.
>>>
>>> That would also have a few beneficial side-effects:
>>>
>>> - High uptime nodes are likely to be well-connected. So requests should be 
>>> less likely to be stuck in badly connected clusters.
>>> - For new nodes this is essentially random-routing the first steps.
>>> - The effects of churn on the network are reduced, because the requests 
>>> quickly get into the well-connected cluster.
>>>
>>> The bad side-effect would be that attacks using long-lived, 
>>> high-bandwidth nodes would become easier: for those attacks, the 
>>> network would effectively be half as large. But such attacks are 
>>> expensive, and someone who wants to mount them effectively has to 
>>> provide a backbone for Freenet, which increases privacy for 
>>> everything that is not being attacked right now.
>> IMHO the routing effects are fairly serious, but solvable:
>>
>> When we add a peer we send it low-HTL, specialised requests; after a
>> few hours we start sending it high-HTL requests which are much further
>> from our location. This may cause the peer's success rate to drop, so
>> it gets dropped in favour of a newbie; then it comes back, and we get
>> "flapping".
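>>
>> To spell out the dynamic (a sketch only; the names and the grace
>> period are invented for illustration, not what fred does):
>>
>> enum PeerPhase { PROBATION, ESTABLISHED }
>>
>> final class PeerPhases {
>>     static final long GRACE_MS = 3 * 60 * 60 * 1000L; // "a few hours"
>>
>>     // PROBATION: only low-HTL requests near the peer's specialisation,
>>     // so its success rate looks good. ESTABLISHED: high-HTL requests
>>     // far from our location; the success rate drops, the peer may be
>>     // dropped for a newbie, reconnects, and re-enters PROBATION:
>>     // that's the flapping cycle.
>>     static PeerPhase phaseOf(long connectedMillis) {
>>         return connectedMillis < GRACE_MS ? PeerPhase.PROBATION
>>                                           : PeerPhase.ESTABLISHED;
>>     }
>> }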
> This sounds like a good idea.
>
> In keeping with tradition, I would presume that even a newbie node will 
> get a high-HTL request if we run out of established nodes.
>
> If that is the case, my only suggestion would be a partial rewrite of 
> that getNearestPeer() function, as it is becoming a bit unwieldy... 
> there is surely a better way to represent such telescoping fallback 
> rules.
Yeah, it is rather ugly. I don't want to introduce a Turing-complete
language though, and there *are* concurrency issues... It should be
possible to improve it somehow...
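
One way that might work without a rule language: represent each fallback
tier as a predicate and try the tiers in order, strictest first. A sketch
(hypothetical names; the real getNearestPeer() tracks more state than
this):

import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

final class TieredPeerSelector<P> {
    private final List<Predicate<P>> tiers; // strictest tier first
    private final Comparator<P> byDistance; // closest to target key first

    TieredPeerSelector(List<Predicate<P>> tiers, Comparator<P> byDistance) {
        this.tiers = tiers;
        this.byDistance = byDistance;
    }

    // Try each tier in order. If the last tier accepts any peer, even a
    // newbie gets the high-HTL request once established nodes run out.
    Optional<P> select(List<P> peersSnapshot) {
        for (Predicate<P> tier : tiers) {
            Optional<P> best = peersSnapshot.stream()
                    .filter(tier)
                    .min(byDistance);
            if (best.isPresent()) return best;
        }
        return Optional.empty();
    }
}

The tier ordering becomes data rather than nested if/else, so adding a
rule is one line; selecting from an immutable snapshot of the peer list
would keep the selection itself free of locking.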
