Roger Hayter wrote:
In message <[EMAIL PROTECTED]>, Martin Stone Davis <[EMAIL PROTECTED]> writes: <SNIP>
Roger Hayter wrote: Not entirely. Suppose we start with 10 nodes which are only 0.1% specialised due to random variations in what they have cached, and are never likely to get more specialised.
Specialization:
---------------
I guess the idea is that if there are 10 areas of specialization, and we know 5 nodes that specialize in the first area, but all 5 regularly QR, then we begin diluting the specialization of the other nodes, which aren't specialized in the first area. Did I state the problem correctly?
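To check I have the mechanism right, here is a toy sketch of that dilution (everything here - the cache size, the numbered areas - is invented for illustration, not taken from the actual node):

    from collections import deque

    CACHE_SIZE = 100

    # A node specialised in area 7: its cache starts out all area-7 keys.
    cache = deque([7] * CACHE_SIZE, maxlen=CACHE_SIZE)

    # The five area-0 specialists are QRing, so area-0 requests overflow
    # to this node. Every request it serves passes through its cache,
    # evicting an old entry - so serving area 0 erodes its speciality.
    for _ in range(50):
        cache.append(0)

    for area in (7, 0):
        share = sum(1 for k in cache if k == area) / CACHE_SIZE
        print("area", area, "share of cache:", share)   # 0.5 and 0.5

After only 50 overflow requests the node is as much an area-0 node as an area-7 one, which is the dilution I was trying to describe.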
Why aren't they likely to get more specialized? Can you draw out your example a bit more?
Well, if speed of rejection counts in a node's favour many times more than what it is good at retrieving, then there is little or no selection pressure in favour of a given specialisation. I am suggesting there is a threshold of selection pressure below which it has no effect, because random events in the node have a bigger effect on specialisation than any net specialisation of requests over relevant time periods.
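To put a number on what I mean by a threshold, here is a toy random-walk model (the figures are invented; "pressure" is the average per-request drift towards specialisation, "noise" is random cache churn):

    import random

    def final_specialisation(pressure, noise, steps=10000):
        s = 0.001                        # start 0.1% specialised, as above
        for _ in range(steps):
            s += pressure + random.gauss(0, noise)
            s = min(max(s, 0.0), 1.0)    # can't be <0% or >100% specialised
        return round(s, 3)

    random.seed(1)
    # Pressure far below the noise: the outcome is just the random walk.
    print([final_specialisation(1e-6, 1e-3) for _ in range(5)])
    # Pressure comparable to the noise: every run drifts to full specialisation.
    print([final_specialisation(5e-4, 1e-3) for _ in range(5)])

Below the threshold the net drift is swamped by the churn, so the ten nodes stay about as unspecialised as they started.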
If so, I would say the problem here is that all 5 nodes specialized near this key are hares. If the hares become burdened with QRs, shouldn't NGR lead us eventually to favor a sixth node (a tortoise), which also becomes specialized?

Yes, but the tortoise would be favoured for the whole keyspace, not be very good at that, and so would soon go out of use again.

In my hypothetical, the tortoise we find only needs to be good at giving a success for the keyspace near the target key. Why would that one tortoise be favored by us for the whole keyspace?

If it had been previously neglected so much, it could QR everything it received quicker than the (overwhelmed) "fast" nodes could at that time. This might be a bigger effect than any specialisation present.

Also, how could any one node be *that* wonderful at returning successes for the entire keyspace? If it were, yes, it would eventually go out of use (due to being overloaded) for *some* of the nodes which use it; but as most nodes back off from it, the few that continued to use it would do so for whatever keys they had found it to be best for, leading to it becoming specialized.
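Here is the behaviour I am imagining, as a toy model (my own simplification, not the real NGR code; the numbers and the fixed retry penalty are invented): each node keeps a per-neighbour estimate of time-to-success for keys in this region, and a QR is charged at its elapsed time plus the cost of retrying elsewhere, so an overloaded hare prices itself out and the tortoise takes over for exactly these keys.

    RETRY_PENALTY = 5.0   # hypothetical cost of rerouting after a QR

    class Estimate:
        """Running estimate of seconds-to-success for one neighbour, for
        keys near the target key (an exponential moving average)."""
        def __init__(self, initial, alpha=0.1):
            self.t = initial
            self.alpha = alpha

        def report(self, elapsed, success):
            # A QR still costs its elapsed time, plus the retry elsewhere.
            cost = elapsed if success else elapsed + RETRY_PENALTY
            self.t = (1 - self.alpha) * self.t + self.alpha * cost

    # The hare starts looking fast (0.5s) but is overloaded and QRs;
    # the neglected tortoise is steady at 3s but actually succeeds.
    nodes = {"hare": Estimate(0.5), "tortoise": Estimate(3.0)}

    for i in range(15):
        chosen = min(nodes, key=lambda n: nodes[n].t)  # fastest-looking wins
        if chosen == "hare":
            nodes["hare"].report(elapsed=0.3, success=False)     # QR
        else:
            nodes["tortoise"].report(elapsed=3.0, success=True)
        print(i, chosen, round(nodes["hare"].t, 2), round(nodes["tortoise"].t, 2))

After a handful of QRs the hare's estimate crosses the tortoise's and routing flips - per key region, not for the whole keyspace, which is why I don't see the tortoise being favored everywhere.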
Altruism:
---------
I think the problem with any altruistic strategy is that it then lets non-altruistic nodes get the better of us, and hordes of people would choose a non-altruistic hack of Freenet. Also, if we adopt your altruistic strategy, couldn't it be argued that we would be unnecessarily slowing down the tortoises even more? NGR takes care of load balancing naturally.

This, I have no answer to (but there may be one - there is some information revealed by the non-altruistic strategy).

Not sure what you mean by "some information revealed".

I am assuming an altruistic node would send a set of requests (to the given node) with a more specialised bias than a non-altruistic node.
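Concretely, something like this (a sketch only; the neighbour table and numbers are made up): the altruist routes by fit to each neighbour's specialisation, the non-altruist purely by current speed, and the mismatch between incoming keys and one's own specialisation is the information I meant - a neighbour could measure it on its incoming stream and spot non-altruistic peers.

    import random

    def altruistic_choice(key, neighbours):
        # Send to the neighbour whose specialisation best fits the key.
        return min(neighbours, key=lambda n: abs(n["spec"] - key))

    def non_altruistic_choice(key, neighbours):
        # Send to whichever neighbour currently looks fastest, fit or not.
        return min(neighbours, key=lambda n: n["est_time"])

    neighbours = [
        {"name": "A", "spec": 0.1, "est_time": 2.0},
        {"name": "B", "spec": 0.5, "est_time": 0.4},   # the current hare
        {"name": "C", "spec": 0.9, "est_time": 1.5},
    ]

    random.seed(0)
    keys = [random.random() for _ in range(1000)]
    for choose in (altruistic_choice, non_altruistic_choice):
        # Average mismatch between each key and the chosen node's
        # specialisation: small for the altruist, large otherwise.
        mismatch = sum(abs(choose(k, neighbours)["spec"] - k)
                       for k in keys) / len(keys)
        print(choose.__name__, round(mismatch, 3))

Whether anyone can act on that signal is another question, but that is the information revealed.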
-Martin
--
Roger Hayter
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
