Longer term, the limit would indeed be configured as part of an overlay policy; that's on the "to do" list. I'd like implementations and protocols to be as liberal as possible, since the same code may run anywhere from a LAN with TB disks, where a gigabyte data object barely moves the bandwidth meter, to a sensor network on long-range RF links, where anything above a few hundred bytes would kill the network.
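
To make "configured as part of an overlay policy" concrete, here is a minimal Python sketch; the parameter name "max-object-size" and the 1 MB fallback are purely hypothetical, not anything the protocol specifies:

    # Hypothetical per-overlay policy knob: the cap on a single stored
    # object comes from the overlay configuration rather than the code,
    # so the same implementation can serve a TB-disk LAN overlay and a
    # few-hundred-byte sensor overlay.
    DEFAULT_MAX_OBJECT_SIZE = 1 << 20  # 1 MB fallback; value is an assumption

    def max_object_size(overlay_config: dict) -> int:
        """Return this overlay's cap on the size of a single data object."""
        return overlay_config.get("max-object-size", DEFAULT_MAX_OBJECT_SIZE)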

Henning

On Nov 14, 2008, at 4:04 AM, Narayanan, Vidya wrote:


When an overlay is composed of heterogeneous nodes, it is unrealistic to expect all nodes to have the ability to store and serve all data. A node may always reject a request to store something; however, the robustness of a peer-to-peer network suffers when a portion of the nodes start running out of storage capacity for critical data. What qualifies as critical data is, of course, overlay specific and may differ from one overlay to another.

A storing node cannot readily tell what type of data it is being asked to store; however, it may be reasonable for an overlay to provide guidance on the upper bound on the size of a single data item. This bound should be configurable per overlay and might even be advertised as part of the overlay configuration. That allows participating nodes with heterogeneous capabilities to maintain a reasonable level of robustness in the overlay. Nodes may choose to accept data items larger than this upper bound; however, the data owner must be prepared for a rejection of the store request when the data exceeds the bound.
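
A minimal sketch of the storing node's side, assuming the bound is advertised in the overlay configuration; the configuration key, error code, and default value here are hypothetical:

    # Hypothetical store handler: the overlay advertises an upper bound
    # on single-item size; a node may accept larger items by local
    # policy, but the data owner must be prepared for rejection.
    storage: dict = {}

    def handle_store(key, data, overlay_config, accept_oversized=False):
        limit = overlay_config.get("max-object-size", 1 << 20)  # assumed name/default
        if len(data) > limit and not accept_oversized:
            return "Error_Data_Too_Large"  # hypothetical error code
        storage[key] = data
        return "OK"

    # e.g. handle_store(b"k", b"x" * 2048, {"max-object-size": 1024})
    # would be rejected unless the node opts to accept oversized items.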

Thoughts?

Vidya
_______________________________________________
P2PSIP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/p2psip

