On Mar 10, 2008, at 6:48 PM, jiangxingfeng 36340 wrote:

>> Let me try parsing this another way.
>> If the Service location is just a normal lookup, returning normal
>> responses, then it clearly needs no support from the P2P layer we
>> are defining.
> Right. If the standard service name method works in all cases,
> there is no need to develop new methods. But in some cases, this
> method has shortcomings:
> 1. If too many peers provide the same service, they all publish
> their information to the peer responsible for the service-id
> obtained by hashing the standard service name, so that peer has to
> store a very large number of <key, value> pairs;
>
> 2. If the service is popular, all queries will go to that same
> responsible peer, which may overload it.
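
The scheme being described there is, roughly, the following (a sketch
in Python, not text from any draft; a plain dict stands in for the DHT
and the helper names are invented for illustration):

    import hashlib

    dht = {}   # stand-in for the overlay: key -> list of provider records

    def service_id(service_name):
        # One fixed key per service name -> one responsible peer.
        return hashlib.sha1(service_name.encode()).hexdigest()

    def publish(service_name, provider_addr):
        # Every provider of the same service writes to the same key, so
        # the responsible peer ends up storing all the <key, value> pairs
        # (shortcoming 1 above).
        dht.setdefault(service_id(service_name), []).append(provider_addr)

    def lookup(service_name):
        # Every query for a popular service hits the same key, i.e. the
        # same peer (shortcoming 2 above).
        return dht.get(service_id(service_name), [])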

This isn't how the service discovery algorithms work (either RELOAD's,
ReDiR, or any of the others I can think of).  ReDiR, in particular,
is adaptive: the service providers and the queriers gauge the
population density of the service and use that to select where to
search for the service, so there is no overload on a single
responsible peer.
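
A much-reduced sketch of the ReDiR idea, only to show where the load
goes; the real registration and lookup rules (which levels a provider
registers at, where a lookup starts) are more involved, and the
constants below are assumptions:

    import hashlib

    BRANCHING = 2   # assumed branching factor
    ID_BITS = 160   # assume SHA-1-sized node ids

    def redir_rendezvous_key(namespace, level, node_id):
        # At level `level` the id space is split into BRANCHING**level
        # equal intervals, and each interval gets its own rendezvous key.
        # Providers register (and queriers search) at these per-interval
        # keys, chosen according to how densely populated the service
        # looks, so registrations and queries for one service land on
        # many different peers rather than on one responsible peer.
        interval = node_id // (2**ID_BITS // BRANCHING**level)
        raw = "%s,%d,%d" % (namespace, level, interval)
        return int(hashlib.sha1(raw.encode()).hexdigest(), 16)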

With the technique in RELOAD, the data is stored at random locations
and random queries are used to search, so again there is no overload
on a single peer.
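
In the same spirit, a sketch of the random-placement idea (again with
a dict standing in for the overlay, and the probe loop simplified; a
real probe would fetch whatever is stored nearest a random
resource-id):

    import hashlib, os, random

    dht = {}   # stand-in for the overlay: resource-id -> record

    def publish_random(service_name, provider_addr):
        # Store the record under a random resource-id instead of under
        # hash(service_name), so no single peer collects every record.
        rid = hashlib.sha1(os.urandom(20)).hexdigest()
        dht[rid] = (service_name, provider_addr)

    def probe_random(service_name, max_probes=50):
        # Each probe lands on a (roughly) random peer, so the query load
        # is spread out as well.
        keys = list(dht)
        for _ in range(min(max_probes, len(keys))):
            name, addr = dht[random.choice(keys)]
            if name == service_name:
                return addr
        return None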


>
>
>> If pure random probes suffice, then that's enough.
> Maybe we need to collect some experimental data to see whether
> random probes would work in most cases. I mean, if a service
> provider can be found within two or three transactions, IMHO, that
> is a reasonable latency.
>

The technique in RELOAD works fine if you can predict the server
population reasonably accurately.  If you can't, it won't work very
well.
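
Back-of-envelope version of that dependence, with made-up numbers: if
a random probe hits a provider record with probability p, random
search needs about 1/p probes on average, and p is exactly what the
population estimate gives you.

    def expected_probes(num_providers, num_locations):
        # Geometric-distribution estimate: each probe independently hits
        # a provider record with probability
        # p = num_providers / num_locations.
        return num_locations / num_providers

    # e.g. 100 providers spread over 10,000 storage locations:
    # expected_probes(100, 10000) -> 100 probes, nowhere near the two
    # or three transactions hoped for above.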

Bruce

