> Timeouts are going to be challenging no matter what. Even a small
> message may take several seconds if links are crappy or the overlay is
> large. (Somebody mentioned that search times in existing overlays,
> such as Azureus, can routinely exceed 10 seconds and call setup
> latencies in Skype are pretty long.)
>
> I suspect we'll need to recommend a more-or-less random value and
> high-performance implementations will need to implement algorithms to
> tune the timeout based on the measured properties of the overlay.
> Unfortunately, timeouts will matter mostly in less-than-ideal
> networks, rather than a data center LAN.

I agree. Maybe the peer could perform some diagnostic actions to gather measured data about the overlay and adjust the timeout value based on that data. For example, a peer could use PING or other messages to calculate the RTT of a request, even when the response arrives late, i.e., when the response can no longer be matched to its request transaction because the timer has already fired and the transaction has been destroyed.
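A minimal sketch of that idea, assuming a hypothetical transaction table where timed-out entries are moved to a short-lived "expired" cache instead of being dropped, so that a late response can still contribute an RTT sample (the class and field names below are illustrative, not from any spec):

    import time

    class RttProbe:
        """Tracks outstanding requests and briefly remembers expired ones,
        so late responses can still yield RTT samples (hypothetical helper)."""

        def __init__(self, timeout=3.0, expired_keep=30.0):
            self.timeout = timeout            # current transaction timeout (seconds)
            self.expired_keep = expired_keep  # how long to remember expired transactions
            self.pending = {}                 # txn_id -> send time
            self.expired = {}                 # txn_id -> send time (timer already fired)
            self.max_rtt = 0.0

        def on_send(self, txn_id):
            self.pending[txn_id] = time.monotonic()

        def on_timer(self):
            """Move timed-out transactions to the expired cache instead of dropping them."""
            now = time.monotonic()
            for txn_id, sent in list(self.pending.items()):
                if now - sent > self.timeout:
                    self.expired[txn_id] = self.pending.pop(txn_id)
            # forget very old expired entries
            for txn_id, sent in list(self.expired.items()):
                if now - sent > self.expired_keep:
                    del self.expired[txn_id]

        def on_response(self, txn_id):
            """Record an RTT sample even if the transaction already timed out."""
            sent = self.pending.pop(txn_id, None) or self.expired.pop(txn_id, None)
            if sent is None:
                return None
            rtt = time.monotonic() - sent
            self.max_rtt = max(self.max_rtt, rtt)
            return rtt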
Maintaining the maximum observed RTT may help in choosing a reasonable timeout. However, in some cases the timeout value will be limited by the upper-layer application; for instance, the user may expect to get the information from the overlay within a fixed time.
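A rough illustration of how the two constraints could be combined (the function name, margin factor, and floor below are assumptions for the sketch, not anything specified for the protocol):

    def choose_timeout(max_rtt, app_limit=None, margin=1.5, floor=1.0):
        """Pick a transaction timeout from the largest RTT seen so far.

        max_rtt   -- largest RTT measured on the overlay so far (seconds)
        app_limit -- optional ceiling imposed by the upper-layer application
        margin    -- safety factor on top of the measured maximum (assumed value)
        floor     -- lower bound so a few fast samples don't make us too aggressive
        """
        timeout = max(floor, max_rtt * margin)
        if app_limit is not None:
            # The application's deadline wins, even if the overlay is slower.
            timeout = min(timeout, app_limit)
        return timeout

    # Example: overlay has shown RTTs up to 8 s, but the user wants an answer within 10 s.
    print(choose_timeout(max_rtt=8.0, app_limit=10.0))  # -> 10.0, capped by the application's deadline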
