> > > His analysis applies to any large-scale p2p network.  There are at least
> > > two defenses: either create some sort of certification authority (perhaps
> > > a supervisory p2p network) or allow/encourage fragmentation of the target
> > > network.
> >
> > Come now, this is not impossible. GNUnet does it, and does it well. I posted a
> > way to adapt this to Freenet's architecture a while back. It can be done. It
> > just requires a big code overhaul.
> 
> You might be disagreeing with the conclusions of the paper on Sybil.  If
> so, have you read the paper?  If so, which conclusion are you disagreeing
> with?

I am disagreeing with the paper. Not with its conclusions, but with its premises.
 
> Or you might be saying that Freenet could create a CA.  If so, can you be
> more specific?

Not create a CA, but act as one. They state that a CA is necessary to prevent cancer 
nodes from creating multiple identities, in order to ensure privacy and prevent a group 
of nodes from attacking the network. I think you'll agree with me when I say that, in 
terms of data storage, Freenet does an excellent job of ensuring security despite not 
trusting the node with the data. If they think they can brute-force a CHK, I welcome 
them to try. In terms of responsibility for storing data, nobody is responsible in 
Freenet. They could pretend to be 100000 nodes, collect lots of data, and delete it, 
and nobody would care. The only way they got the data in the first place was by caching it.
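To make the CHK point concrete, here is a minimal sketch (this is not Freenet's actual key code; it assumes plain SHA-256 and ignores the encryption layer) of why a cancer node can't hand back forged data for a content-hash key: the key is a hash of the content, so anyone holding the key can check what they get.

import java.security.MessageDigest;
import java.util.Arrays;

// Hypothetical sketch of content-hash keying: the key is derived from the
// content itself, so data returned for a key can be verified without
// trusting whichever node supplied it.
public class ChkSketch {

    // Derive a key from the content (assumes SHA-256 for illustration).
    static byte[] deriveKey(byte[] content) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(content);
    }

    // Verify data received for a given key; if the hash does not match,
    // the data was forged or corrupted and is simply dropped.
    static boolean verify(byte[] key, byte[] content) throws Exception {
        return Arrays.equals(key, deriveKey(content));
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "some inserted document".getBytes("UTF-8");
        byte[] key = deriveKey(original);

        System.out.println(verify(key, original));                        // true
        System.out.println(verify(key, "forged data".getBytes("UTF-8"))); // false
    }
}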

> Or you might be saying that Freenet could allow or encourage network
> fragmentation.  Are you?

No, I am not.

There are still two other areas where cancer nodes can be a problem. The first is 
flooding. This is what the GNUnet model solves. Here's the short version: if you give 
each node credit proportional to the time it saved you by processing a request through 
it rather than through someone else, and then let it spend that credit to have you 
process its requests, you don't need any outside authority. Both nodes know they are 
not being cheated; if one of them is, it simply stops processing the other's requests. 
Simple as that. Now how does a node build up credit in the first place? Simple: if CPU, 
network bandwidth, or hard drive space are not being used at any particular moment, 
they go to waste. So even if a node has zero credit, you'll still process its request 
if you have idle resources, and it gains credit with you that way.
This way no node can do more damage than the amount of benefit it has previously 
provided to the network, plus the slack resources in the network, plus the CPU required 
to check and then drop N requests. That's as good as it gets anywhere.
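To pin down that accounting, here is a minimal sketch of the credit scheme in Java. It is not GNUnet's actual code; the class name, method names, and the cost metric are made up for illustration, and a real node would also have to bound balances and handle concurrency.

import java.util.HashMap;
import java.util.Map;

// Rough sketch of the credit accounting described above: peers earn credit by
// doing useful work for us, spend it when we work for them, and get served for
// free only out of idle capacity that would otherwise go to waste.
public class CreditLedger {

    private final Map<String, Long> credit = new HashMap<>();

    // Called when a peer answered one of our requests; 'savedCost' is however
    // we estimate the effort it saved us (a stand-in metric for this sketch).
    public void recordWorkDoneForUs(String peer, long savedCost) {
        credit.merge(peer, savedCost, Long::sum);
    }

    // Decide whether to process a request from 'peer' that would cost us 'cost'.
    // If we have idle resources the work is free anyway, so serve it regardless;
    // otherwise only serve peers that have earned at least that much credit.
    public boolean shouldProcess(String peer, long cost, boolean haveIdleResources) {
        if (haveIdleResources) {
            return true; // slack capacity would go to waste, so give it away
        }
        return credit.getOrDefault(peer, 0L) >= cost;
    }

    // Charge the peer's balance once we actually do the work for it.
    public void charge(String peer, long cost) {
        credit.merge(peer, -cost, Long::sum);
    }

    public static void main(String[] args) {
        CreditLedger ledger = new CreditLedger();

        // A brand-new peer gets served only while we are idle...
        System.out.println(ledger.shouldProcess("newPeer", 10, true));  // true
        System.out.println(ledger.shouldProcess("newPeer", 10, false)); // false

        // ...but once it has done work for us, it can spend that credit
        // even when we are busy.
        ledger.recordWorkDoneForUs("newPeer", 25);
        System.out.println(ledger.shouldProcess("newPeer", 10, false)); // true
        ledger.charge("newPeer", 10);
    }
}

The point of the idle-resources branch is exactly the bootstrapping argument above: a node with zero credit can still get service from slack capacity, so the worst a flooder can extract is its past contribution plus that slack plus the cost of rejecting its requests.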

The only problem this does not solve is a node that does a good job of processing 
requests overall, but always drops a single key. Freenet cannot truly solve this 
problem, because there is no way to know that the node really should have had the data. 
BUT a central authority cannot solve this problem either! The only way for it to do so 
would be to know where all the data on the network was stored, AND to have all the 
requests routed and returned through it; otherwise a node could claim it did not 
receive the data when it did. I don't think I need to explain why this is not a viable 
solution.