Cancer nodes currently pose a serious threat to the network. However, stopping 
them is not simple, especially in the case of an anti-specialization attack, as 
discussed previously. My previously proposed solution would not work, because 
someone could find a hash that works and then start subtracting values until 
they find XXX. Toad pointed this out as a problem with SSKs, but it is a 
problem with CHKs too.

I have a better solution.
Take the index key that things are located under now. Split it into two parts. 
Hash each part. Then start incrementing one part and hashing it each time 
until the last few digits of its hash are the same as those of the hash of the 
second part. Data is then routed based on the hash of the resulting two 
hashes. This is basically the same as my previous proposal, except that 
because the key is hashed after it is split, someone cannot work backwards.

However, this can never be a total solution, because it only increases the 
amount of work required before the attack can start; it does not increase the 
amount of work needed to actually execute it.

As Toad correctly pointed out, disconnecting from cancer nodes is not a total 
solution either. Any mechanism that relies on negative trust can be 
circumvented simply by contacting lots of nodes. Nor can one introduce a means 
for nodes to share some sort of list of nodes that they don't trust, because 
that relies upon trusting those nodes in the first place, and if you could 
trust them, you wouldn't need the list.

So, what Freenet needs in order to fully thwart cancer node attacks is a 
decentralized positive-trust-based model. One existing model for how to do 
this is GNUnet's. It implies a very smart trust-based network, and it can 
place hard limits on the amount of damage that an attacker can do. This could 
make both flooding attacks and anti-specialization attacks on Freenet 
impossible.

Consider the following:
Suppose we have a system where each node has a particular level of trust in 
each other node in its table. Each time a node makes a request, it risks a 
level of trust that it specifies. If the first node successfully returns the 
data, it may dock the requesting node UP TO that amount from its trust, and 
the requesting node gives the first one the amount of trust that it specified. 
Each node then always tries to selfishly gain as much trust as possible. This 
way the requests of the highest priority go through, and inflation is 
prevented because there is a finite amount of resources, thus ensuring that no 
node is able to take away more from the network than it put in, minus the 
slack in the network. (Even requests that gain zero trust will go through if 
the bandwidth/CPU usage for that time would otherwise go to waste.)
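
As a rough illustration of the bookkeeping (the class and method names here 
are mine, not anything that exists in Freenet or GNUnet):

import java.util.HashMap;
import java.util.Map;

public class TrustLedger {

    // How much trust this node currently places in each peer.
    private final Map<String, Double> trust = new HashMap<>();

    public double getTrust(String peer) {
        return trust.getOrDefault(peer, 0.0);
    }

    // Called by the node that served the request: it may dock the requester
    // UP TO the amount of trust the requester chose to risk, never more.
    public void dockRequester(String requester, double risked, double desiredDock) {
        double dock = Math.min(desiredDock, risked);
        trust.merge(requester, -dock, Double::sum);
    }

    // Called by the requester: credit the responder with the trust it risked.
    public void creditResponder(String responder, double risked) {
        trust.merge(responder, risked, Double::sum);
    }

    // Selfish scheduling: take the requests offering the most trust first,
    // but still accept zero-trust requests if the bandwidth/CPU would
    // otherwise go to waste.
    public static boolean accept(double offeredTrust, boolean haveSpareCapacity) {
        return offeredTrust > 0 || haveSpareCapacity;
    }

    public static void main(String[] args) {
        // In reality these two calls happen on two different nodes' ledgers.
        TrustLedger ledger = new TrustLedger();
        ledger.dockRequester("requester", 2.0, 5.0); // only 2.0 was risked, so only 2.0 is docked
        ledger.creditResponder("responder", 2.0);
        System.out.println(ledger.getTrust("requester") + " " + ledger.getTrust("responder"));
    }
}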

The way GNUnet does this would not work with Freenet. In GNUnet, at each hop 
the request takes towards its destination, the node must decrease the trust it 
is risking on the transaction by a small amount, so that it can be sure it 
actually gains something. If all the nodes do this, the network becomes MORE 
likely to drop requests as they get closer and closer to their destination. 
Also, it is possible for a node to get shortchanged if one node's trust 
(because of its position in the network) is not as valuable to it as the 
next's. The solution to this is nontrivial. First, each node would need to 
compute the relative value of each node's trust. The logical way to do this 
would be to fold trust right in with the NGRouting time estimators, and 
consider its value as the total overall decrease in routing time for requests 
coming from your node if you gain that amount of trust. Then, rather than 
evaluating incoming requests based only on the amount of trust we would gain 
by completing them, we also need to take into account the relative probability 
of the request actually succeeding.
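
Roughly, evaluating an incoming request would then look something like the 
following, where trustValueOf() and successProbability() are placeholders for 
whatever the NGRouting estimators would actually provide:

public class RequestEvaluator {

    // Value to us of one unit of this peer's trust: the estimated overall
    // decrease in routing time for our own future requests if we gain it.
    // A real node would derive this from its NGRouting time estimators.
    static double trustValueOf(String peer) {
        return 1.0; // placeholder
    }

    // Estimated probability that we can actually return the requested data.
    static double successProbability(String key) {
        return 0.5; // placeholder
    }

    // Expected payoff of serving the request: the trust we would gain,
    // weighted by its value to us and by the chance that we succeed at all.
    static double expectedPayoff(String peer, String key, double trustOffered) {
        return trustOffered * trustValueOf(peer) * successProbability(key);
    }

    public static void main(String[] args) {
        System.out.println(expectedPayoff("peerA", "CHK@example", 2.0));
    }
}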

Then once this is in place it is easy to stop flooding attacks. Currently, if 
an attacker is inserting or requesting large amounts of data to waste network 
bandwidth, they can deprive others of HTL*TheirBandwidth; under this system 
they would not be able to do any more damage than having one other computer 
receive all of those requests (TheirBandwidth). This could also be used to 
prevent anti-specialization attacks, because it is possible to dock the 
requesting node the FULL trust level they put on the request if the data is 
not in the network. So then, unless they have built up a huge amount of trust, 
the probability of their packets being dropped approaches 1. In order to build 
up enough trust to outweigh the normal traffic, they will, on average, have to 
have processed HTL*fakeRequests successful requests beforehand. Even then, 
under load the amount of trust required to get their requests through is 
higher, and as their attack starts to succeed the probability of request 
success goes down, so they are even more likely to be dropped. The 
counter-argument to this is that any node can process N requests successfully 
and then turn around and send out N/HTL requests to another node (for whatever 
reason). However, I would say that this is not too serious, as its overall 
contribution to the network is still greater.
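
A back-of-the-envelope comparison of the damage bounds, with made-up numbers:

public class FloodCost {
    public static void main(String[] args) {
        double attackerBandwidth = 1.0;  // arbitrary units
        int htl = 10;                    // hops-to-live

        // Today every hop forwards the junk, so the network pays HTL times
        // the attacker's bandwidth.
        double damageNow = htl * attackerBandwidth;

        // With full-trust docking on requests for data that is not in the
        // network, junk requests stop paying off after the first hop, so the
        // damage is bounded by what one peer receives directly.
        double damageWithTrust = attackerBandwidth;

        // To push fakeRequests junk requests through anyway, the attacker
        // must on average first serve HTL * fakeRequests successful requests.
        int fakeRequests = 1000;
        long workToFlood = (long) htl * fakeRequests;

        System.out.printf("damage now: %.1f, with trust: %.1f, work to flood %d fakes: %d successful requests%n",
                damageNow, damageWithTrust, fakeRequests, workToFlood);
    }
}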

Also, to successfully prevent a distributed flooding anti-specialization 
attack, the failure table needs to be as large as possible. Ideally the 
failure table would be purely time dependent; however, there is always a limit 
to resources. If that limit only comes into play when the network is under 
attack, it would be better for entries to be removed randomly rather than in 
order. To make this possible TUKs are necessary, as otherwise a large failure 
table could be a problem for some applications.
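
A failure table with random eviction is straightforward; here is a rough 
sketch (purely illustrative, not the existing failure table code):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class FailureTable {

    private final int capacity;
    private final Map<String, Long> failedAt = new HashMap<>();
    private final List<String> keys = new ArrayList<>(); // for O(1) random eviction
    private final Random rng = new Random();

    public FailureTable(int capacity) {
        this.capacity = capacity;
    }

    public void recordFailure(String key) {
        if (!failedAt.containsKey(key)) {
            if (keys.size() >= capacity) {
                // Evict a RANDOM entry rather than the oldest one, so an
                // attacker cannot flush the table just by causing new failures.
                int i = rng.nextInt(keys.size());
                String victim = keys.get(i);
                keys.set(i, keys.get(keys.size() - 1));
                keys.remove(keys.size() - 1);
                failedAt.remove(victim);
            }
            keys.add(key);
        }
        failedAt.put(key, System.currentTimeMillis());
    }

    // Ideally entries would expire purely by time; the size limit above only
    // matters when the network is under attack.
    public boolean recentlyFailed(String key, long maxAgeMillis) {
        Long t = failedAt.get(key);
        return t != null && System.currentTimeMillis() - t < maxAgeMillis;
    }

    public static void main(String[] args) {
        FailureTable table = new FailureTable(2);
        table.recordFailure("KSK@a");
        table.recordFailure("KSK@b");
        table.recordFailure("KSK@c"); // evicts one of the first two at random
        System.out.println(table.recentlyFailed("KSK@c", 60000));
    }
}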

It should be noted that in order to make TUKs work universally they would need 
to be implemented differently than discussed in the docs. Here is my current 
concept of how they could work:

First, TUKs would take over a role that is normally reserved for the manifest. 
When a TUK is requested, if a node has it, it checks the expiration date; if 
it has passed, it passes the request upstream anyway to check whether a newer 
version is available. All the nodes in the chain then have the latest version. 
The TUK itself is not encrypted and can be modified by the node that is 
holding it. However, it is signed with the key that it is inserted under, so 
any modifications must be sanctioned by the original creator. If the original 
creator sends an update it will replace the contents of the TUK, but it will 
not be accepted if they attempt to increment the version number by more than 
one, set the expiration time in the past, or alter static content. The node 
that has the TUK stored on it is in charge of enforcing this.
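
For concreteness, here is a sketch of the acceptance check the storing node 
would run on an update. The record fields and the signature check are 
placeholders for whatever the real key format turns out to be; only the rules 
themselves are the point.

import java.util.Arrays;
import java.util.List;

public class TukUpdateCheck {

    static class TukRecord {
        long version;
        long expiresAt;            // expiration date, as epoch millis
        byte[] staticContentHash;  // hash of the content that may never change
        List<byte[]> delegateKeys; // keys allowed only to bump version numbers
    }

    // Placeholder: in reality this would verify a signature made with the
    // key the TUK was inserted under.
    static boolean signedByInsertKey(TukRecord update, byte[] insertPublicKey) {
        return true;
    }

    static boolean acceptUpdate(TukRecord current, TukRecord update,
                                byte[] insertPublicKey, long now) {
        if (!signedByInsertKey(update, insertPublicKey)) return false;
        // The version number may only move forward, and by no more than one.
        if (update.version != current.version + 1) return false;
        // The expiration time may not be set in the past.
        if (update.expiresAt <= now) return false;
        // Static content may never be altered.
        if (!Arrays.equals(update.staticContentHash, current.staticContentHash)) return false;
        return true;
    }

    public static void main(String[] args) {
        TukRecord current = new TukRecord();
        current.version = 3;
        current.staticContentHash = new byte[] {1, 2, 3};
        TukRecord update = new TukRecord();
        update.version = 4;
        update.expiresAt = System.currentTimeMillis() + 86400000L;
        update.staticContentHash = new byte[] {1, 2, 3};
        System.out.println(acceptUpdate(current, update, new byte[0], System.currentTimeMillis()));
    }
}
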
So it is still possible that the creator could try to make their content less 
accessible by creating a new TUK with a very high version number, keeping it 
in their own data store, and hoping that they will fall in the request line 
for the data. They would then effectively hijack their own content in the same 
way that KSKs can be hijacked now. (Although others would not be able to 
hijack it.) I don't see any way around this. However, if some other sites 
simply kept track of the version number, the old content would still be 
retrievable.

The data in the key itself could contain simply a link to the current version, 
or it could have a version for each of several different files (assuming, of 
course, that those can fit in an SSK). It could also have static content, or 
data that is listed there under a CHK and does not ever change. Finally, it 
could contain other public keys that are allowed to update the other parts of 
the TUK. Those keys should not be able to remove other keys or add their own, 
but they should be able to increment version numbers. This would enable Frost 
boards to have a maintainer who can allow many people to post messages and 
everyone to read them, but also revoke the write privileges of those who abuse 
them. It would additionally allow more sophisticated schemes for SSK sites 
where many people can contribute, without giving any of them total access.

It should be noted that this (at least from my perspective) is secondary. The 
real purpose of TUKs, regardless of how they work, should be to eliminate the 
need for nodes to guess keys, and ultimately to reduce network traffic, as the 
vast majority of it is used by applications like Frost that need to make a lot 
of otherwise unnecessary queries.
