The missing elements in Reload are well known in the DHT "industry":
1. Defining neighbor proximity by
   * Delay
   * Number of routing hops
   * Bandwidth
2. Algorithms for fair disk usage, etc.
3. Recovery of temporarily "failed" neighbors that may have been overloaded due to
   congestion on the access link or high CPU usage from cryptography.

The above are key ingredients, and another reason to decouple the DHT layer from the
application layer, SIP in our case. This decoupling is one of the reasons why I believe
P2PP, and certainly not Reload, should be the common starting base for P2PSIP. In
Reload, (1) routing is duplicated in the DHT and SIP layers, and (2) the two layers are
too tightly coupled to benefit from advanced DHT developments.

Henry

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Bruce Lowekamp
Sent: Monday, March 10, 2008 6:58 PM
To: jiangxingfeng 36340
Cc: [email protected]
Subject: Re: [P2PSIP] Random resource probe

On Mar 10, 2008, at 6:48 PM, jiangxingfeng 36340 wrote:

>> Let me try parsing this another way.
>> If the Service location is just a normal lookup, returning normal
>> responses, then it clearly needs no support from the P2P layer we are
>> defining.

> Right. If the standard service name method works in all cases, there is
> no need to develop new methods. But in some cases, this method has some
> shortcomings:
>
> 1. If too many peers provide the same service and all of them publish
> their information to the peer responsible for the service ID obtained by
> hashing the standard service name, that responsible peer has to store a
> very large number of <key, value> pairs.
>
> 2. The other shortcoming is that if the service is popular, all queries
> will go to the responsible peer, which may overload it.

This isn't how the service discovery algorithms work (either reload, ReDiR, or
any of the others I can think of). ReDiR, in particular, is adaptive. In ReDiR,
the service providers and the queriers gauge the population density of the
service and use that to select where to search for the service. There isn't
overload on a single responsible peer. In the technique in reload, the data is
stored at random locations and random queries are used to search, so again
there is no overload on a single peer.

>> If pure random probes suffice, then that's enough.

> Maybe we need to collect some experimental data to see whether random
> probes work in most cases. I mean, if the service providers can be found
> within two or three transactions, IMHO that is a reasonable latency.

The technique in reload works fine if you can reasonably accurately predict the
server population. If you can't, it won't work very well.

Bruce
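
A quick way to see Bruce's closing point about predicting the provider population:
if a fraction p of random probes lands on a peer that holds a record for the
service, the number of probes needed is geometric with mean 1/p. The sketch below
is only illustrative; the density values and the single success-probability model
are assumptions, not anything taken from the drafts.

import random

def probes_until_hit(density, rng):
    # Each random lookup independently lands on a service record
    # with probability 'density' (fraction of the ID space covered).
    probes = 0
    while True:
        probes += 1
        if rng.random() < density:
            return probes

def mean_probes(density, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(probes_until_hit(density, rng) for _ in range(trials)) / trials

for density in (0.5, 0.3, 0.05, 0.01):
    print("density %.2f: avg probes %.1f (expected %.1f)"
          % (density, mean_probes(density), 1.0 / density))

At densities around 0.3-0.5 the "two or three transactions" figure Xingfeng mentions
holds up; at 1% coverage the average cost is closer to a hundred lookups, which is
why misjudging the population makes the random-probe approach fall over.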
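Going back to the two shortcomings Xingfeng lists at the top of the thread: with
the naive approach, every provider derives the same resource ID from the service
name, so all registrations and all queries converge on one responsible peer. A
minimal sketch of that failure mode (the service name and the SHA-1 ID mapping are
illustrative assumptions, not taken from any draft):

import hashlib

def resource_id(name):
    # Map a name to a 160-bit ID, Chord-style.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

# Every provider of the same service computes the same ID ...
service_key = resource_id("voicemail")

# ... so every store and every lookup is routed to the single peer
# responsible for that ID: it holds all the <key, value> pairs and
# absorbs all the query load.
registrations = [(service_key, "provider-%d" % i) for i in range(10000)]
print("%d records, all owned by the peer responsible for %040x"
      % (len(registrations), service_key))

ReDiR and the random-placement technique in reload avoid this by spreading the
records over many IDs, which is the point Bruce makes above.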
