Roger Hayter wrote:

In message <[EMAIL PROTECTED]>, Martin Stone Davis <[EMAIL PROTECTED]> writes

Roger Hayter wrote:

In message <[EMAIL PROTECTED]>, Martin Stone Davis <[EMAIL PROTECTED]> writes

<SNIP>

Roger Hayter wrote:

Specialization:
---------------
I guess the idea is that if there are 10 areas of specialization, and we know 5 nodes that specialize in the first area, but all 5 regularly QR, then we begin diluting the specialization of the other nodes, which aren't specialized in the first area. Did I state the problem correctly?

Not entirely. Suppose we start with 10 nodes which are only 0.1% specialised due to random variations in what they have cached, and are never likely to get more specialised.


Why aren't they likely to get more specialized? Can you draw out your example a bit more?


Well, if speed of rejection favours a node many times more strongly than what it is good at retrieving does, then there is little or no selection pressure in favour of a given specialisation. I am suggesting there is a threshold of selection pressure below which it has no effect, because, over the relevant time periods, random events in the node have a bigger effect on specialisation than any net specialisation of requests.
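
Something like this toy model is what I have in mind (purely illustrative Python, nothing to do with Fred's actual code; all the numbers are invented). Each node's specialisation takes a random step from cache churn every round, plus a pull proportional to the selection pressure:

    import random

    def mean_specialisation(pressure, churn=0.01, rounds=1000, nodes=10):
        spec = [0.001] * nodes  # start ~0.1% specialised, as above
        for _ in range(rounds):
            for i in range(nodes):
                step = random.gauss(0.0, churn)      # random cache churn
                step += pressure * (1.0 - spec[i])   # routing reinforcement
                spec[i] = min(1.0, max(0.0, spec[i] + step))
        return sum(spec) / nodes

    for p in (0.0, 0.0001, 0.001, 0.01):
        print(p, round(mean_specialisation(p), 3))

Over this finite horizon the weakest pressure is indistinguishable from none at all, because the walk is noise-dominated; only as the pressure approaches the churn does specialisation really take off.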

I see. It would be nice to have some sort of measure of "specialization pressure", both theoretical (somehow in terms of the number of nodes, the number of keys being transferred, etc.) and practical (in terms of things we could measure at our own node). That reminds me of my eSpecializationRelevance (see the thread: "Measuring node/network health"), which is supposed to be a measure of how much our specialization should matter to other nodes.
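
For the practical side, I imagine something like this (a hypothetical sketch; the key model and all the names are mine, not anything in Fred). Treat keys as points on a circle and ask how concentrated our inbound requests are around one spot, compared with uniform traffic:

    def circular_distance(a, b):
        d = abs(a - b) % 1.0
        return min(d, 1.0 - d)  # distance on the key circle, max 0.5

    def local_pressure(inbound_keys):
        # crude centre: the observed key closest, in total, to the others
        centre = min(inbound_keys,
                     key=lambda k: sum(circular_distance(k, j)
                                       for j in inbound_keys))
        mean_d = (sum(circular_distance(k, centre) for k in inbound_keys)
                  / len(inbound_keys))
        # uniform traffic gives a mean distance of 0.25 to any fixed
        # point; scale so 0 = no pressure, 1 = perfectly focused
        return max(0.0, 1.0 - mean_d / 0.25)
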
If so, I would say the problem here is that all 5 nodes specialized near this key are hares. If the hares become burdened with QRs, shouldn't NGR eventually lead us to favor a sixth node (a tortoise), which then also becomes specialized?

Yes, but the tortoise would be favoured for the whole keyspace, and, not being very good across all of it, would soon go out of use again.


In my hypothetical, the tortoise we find only needs to be good at returning successes for the keyspace near the target key. Why would that one tortoise be favored by us for the whole keyspace?


If it had been previously neglected so much that it could QR everything it received quicker than the (overwhelmed) "fast" nodes could at that time. This might be a bigger effect than any specialisation present.
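
A back-of-envelope version of what I mean, with invented numbers and a much cruder estimate than NGR's real one:

    def expected_cost(p_success, t_success, t_reject, retry_cost):
        # expected time through this node: a rejection costs the reject
        # round-trip plus the cost of trying somewhere else
        return (p_success * t_success
                + (1 - p_success) * (t_reject + retry_cost))

    # overwhelmed specialist: good success rate, but slow right now
    busy = expected_cost(p_success=0.6, t_success=2.0,
                         t_reject=5.0, retry_cost=3.0)   # = 4.4
    # neglected tortoise: rarely succeeds, but QRs almost instantly
    idle = expected_cost(p_success=0.05, t_success=4.0,
                         t_reject=0.05, retry_cost=3.0)  # ~ 3.1
    print(busy, idle)  # the tortoise scores better, for *every* key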

Yes, but read the rest of what I said:

Also, how could any one node be *that* wonderful at returning successes for the entire keyspace? If it were, yes, it would eventually go out of use (due to being overloaded) for *some* of the nodes which use it, but as most nodes back off from it, the few that continued to use it would do so for whatever keys they had found it to be best for, leading to it becoming specialized.

So even though the tortoise would start out not very specialized, it might become specialized due to the few nodes that end up using it heavily. Or at least that is the hope.


I think that NGR will certainly give rise to specialization in an active network *in the long run* (as time goes to infinity). The real questions are: how long? And how does the amount of time depend on the activity and size of the network? Those are probably best answered by theory/simulations, which I am not equipped to do. Another key question is: how much is specialization even needed in any given network? Certainly, some networks need it more than others.

My eHealth et al. (again, see the thread: "Measuring node/network health") were intended to start getting answers to some of these questions. More and better measures are possible too, I am sure. Got some? :)

Altruism:
---------
I think the problem with any altruistic strategy is that it then lets non-altruistic nodes get the better of us, and hordes of people would choose a non-altruistic hack of Freenet. Also, if we take your altruistic strategy, couldn't it be argued that we would be unnecessarily slowing down the tortoises even more? NGR takes care of load balancing naturally.
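
Here is a toy feedback loop showing what I mean by "naturally" (illustrative only, not Fred's estimator; the inflation model and all the numbers are made up). Load inflates a node's measured times, which worsens its estimates, which sheds traffic:

    def route_time(base_time, load, capacity=100.0):
        # measured response time inflates as load builds up
        return base_time * (1.0 + load / capacity)

    loads = [0.0, 0.0]  # node 0 is intrinsically faster than node 1
    for _ in range(1000):
        times = [route_time(1.0, loads[0]), route_time(1.2, loads[1])]
        best = times.index(min(times))
        loads[best] += 1.0                  # routed traffic adds load
        loads = [l * 0.99 for l in loads]   # load decays between requests
    print(loads)  # both carry traffic; the faster node carries more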

This, I have no answer to (but there may be one - there is some information revealed by the non-altruistic strategy).

Not sure what you mean by "some information revealed".

I am assuming an altruistic node would send a set of requests (to the given node) with a more specialised bias than a non-altruistic node.

Ah, I think I see. You note that a given node has the potential to become good in a certain area, so you give it even more requests in that area than you normally would, hoping that it will then *really* specialize and be extra good for you to use later in that same area, thus paying off your altruism.
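
If I have that right, then the difference is something like this (illustrative only; "specialty", the peer tuples, the toy linear keyspace, and the 10% figure are all my inventions):

    import random

    def pick_route(key, peers, altruism=0.1):
        # peers: list of (estimate_fn, specialty); lower estimate is better
        if random.random() < altruism:
            # altruistic: feed the nearest specialist, to train it up
            # (toy linear keyspace; real keys would need circular distance)
            return min(peers, key=lambda p: abs(p[1] - key))
        # plain NGR: take the best current time estimate for this key
        return min(peers, key=lambda p: p[0](key))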


Fine, but whenever you altruistically send a request to a node (A) when another node (B) would have given you a faster success, you also prevent another node (C) from sending a request to its best candidate, A. So wouldn't your so-called altruism actually hurt the network? For shame!

-Martin

