Jouni,

I like the concept of self-tuning DHT algorithms, but I'm a bit concerned about requiring a somewhat complex algorithm to be implemented as the base DHT. Two of the goals for the base DHT in the current draft were implementation simplicity and reasonable performance across the full range of environments.

You've raised the question of whether reactive recovery will cause scaling problems for larger overlays. I hope someone has a chance to look at that question. RELOAD's current DHT uses a smaller neighborhood set than most other current DHTs (certainly smaller than Pastry's), while specifying reactive rather than periodic recovery. The one thing I think we can say for certain is that the tradeoffs among the various options are unclear. I'm not aware of any work that studies the reactive/periodic choice in combination with different neighborhood set sizes, or that asks whether just the neighborhood set or the entire routing table should be maintained by a given algorithm. And the existing studies certainly don't address the appropriateness of their algorithms in small-scale overlays.

My belief has been that reactive recovery with the small neighborhood set is OK. But I also believe that reactive recovery from failures could be avoided if the neighborhood set were bigger (new peers would still need to be propagated quickly, though). It would be interesting to compare the two; a rough sketch of the contrast is below.
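To make the contrast concrete, here is a minimal Python sketch of the two maintenance styles. This is emphatically not RELOAD's actual algorithm; the stabilize interval, message names, and transport interface are my own illustrative assumptions:

import time

STABILIZE_INTERVAL = 30.0  # hypothetical period for the periodic variant

class Peer:
    def __init__(self, node_id, transport):
        self.node_id = node_id
        self.transport = transport  # abstract send()/probe() interface
        self.neighbors = []         # current neighborhood set

    # Reactive style: every detected failure immediately triggers
    # repair traffic, so maintenance cost tracks the churn rate.
    def on_probe_failure(self, failed_id):
        self.neighbors.remove(failed_id)
        for n in self.neighbors:
            self.transport.send(n, ("find_replacement", failed_id))

    # Periodic style: exchange neighbor lists on a fixed timer and let
    # the next round fill any gaps. Cost is bounded by the timer, but a
    # larger set is needed to ride out failures between rounds.
    def stabilize_loop(self):
        while True:
            for n in list(self.neighbors):
                if not self.transport.probe(n):
                    self.neighbors.remove(n)
                else:
                    self.transport.send(
                        n, ("exchange_neighbors", self.neighbors))
            time.sleep(STABILIZE_INTERVAL)

The reactive variant is exactly the behavior whose cost under heavy churn in a large overlay is in question: its traffic is proportional to the failure rate, while the periodic variant's is proportional to 1/STABILIZE_INTERVAL.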

Whatever we decide the requirements are, we're going to have to study the algorithm across a range of environments before we commit to it.

Bruce


Jouni Mäenpää wrote:
Hi,

As pointed out in [1], DHT-based overlays usually need to be configured
statically (e.g. by assuming a certain churn level and network size).
The problem is that the parameters must be set so that the overlay
achieves the desired reliability and performance even in challenging
conditions, such as under heavy churn. Tuning for the worst case
naturally results in high cost in the common case; tuning for the
common case instead results in poor performance, or even network
partitioning, in worse-than-expected conditions. This kind of
hand-tuning rarely works in real settings because it would require
perfect knowledge of the future.

In my opinion, the P2PSIP WG should specify a self-tuning DHT algorithm
that would adapt to the operating conditions (e.g. network size and
churn rate). This would make it possible to use the DHT in a wide range
of environments instead of, say, only small-scale, low-churn networks.
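As a rough illustration of what adapting to churn could mean (my own simplified sketch of the idea in [1], not its actual mechanism): estimate the mean node lifetime online from observed neighbor failures, then choose the longest probe interval that keeps the probability of losing an entire neighborhood below a target. The exponential-lifetime model and the target value below are assumptions for illustration:

import math

def tuned_probe_interval(mean_lifetime, neighborhood_size,
                         target_loss_prob=1e-6):
    # With i.i.d. exponential lifetimes, P(one neighbor fails within T)
    # = 1 - exp(-T / mean_lifetime).  We want P(all k fail within T)
    # = p**k <= target, i.e. p <= target**(1/k); solve for T.
    p_max = target_loss_prob ** (1.0 / neighborhood_size)
    return -math.log(1.0 - p_max) * mean_lifetime

# Hypothetical numbers: 30-minute mean session time, 3 neighbors.
print(tuned_probe_interval(mean_lifetime=1800.0, neighborhood_size=3))

With a 30-minute mean session time and 3 neighbors this gives a probe interval of roughly 18 seconds; as the measured churn rises or falls, the interval tightens or relaxes automatically.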

Any comments?

Jouni

[1] R. Mahajan, M. Castro, and A. Rowstron. Controlling the cost of
reliability in peer-to-peer overlays. In Proc. IPTPS, 2003.

