There is another angle to consider here: whether a given overlay can actually support a given volume of data. Apart from real storage constraints (e.g., on smaller devices), overlays are also limited by the bandwidth required to move data around due to churn, even if every node has a terabyte disk. An overlay size restriction could also be used to limit data entry into the overlay based on the observed churn characteristics of that specific overlay.
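As a back-of-the-envelope illustration of the churn argument (an illustrative sketch, not part of the thread): each time a node departs, its share of the stored data must be re-replicated elsewhere, so the storage a node can durably host is roughly bounded by its maintenance bandwidth times its average membership lifetime, divided by the replication factor.

```python
# Rough sketch of the churn/bandwidth bound on overlay storage.
# Assumption (for illustration): when a node leaves, every replica it
# held must be re-created elsewhere, consuming cross-system bandwidth.

def sustainable_storage_per_node(bandwidth_bps, avg_lifetime_s, replicas):
    """Approximate bytes of data a node can durably host.

    Maintenance traffic ~= (stored_bytes * replicas) / avg_lifetime,
    so stored_bytes <= (bandwidth * avg_lifetime) / replicas.
    """
    bytes_per_second = bandwidth_bps / 8
    return bytes_per_second * avg_lifetime_s / replicas

# Example: 1 Mbit/s of maintenance bandwidth, one-hour average
# session time, 3 replicas -> about 150 MB, far below a terabyte disk.
limit = sustainable_storage_per_node(1_000_000, 3600, 3)
```

The point of the sketch is that the bound scales with bandwidth and session lifetime, not with disk size, which is exactly why terabyte disks do not help in a high-churn overlay.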
The paper "High Availability, Scalable Storage, Dynamic Peer Networks: Pick Two" (Proc. of HotOS 2003) argues that cross-system bandwidth is the main limiting factor in the dynamic overlays we expect to form. It shows that the supported storage of a Gnutella overlay is a small fraction of the real storage capabilities of the nodes in that overlay, due to the cross-system bandwidth required to maintain the stored data.

Thanks,
Saumitra

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Henning Schulzrinne
Sent: Friday, November 14, 2008 10:37 AM
To: Narayanan, Vidya
Cc: [email protected]
Subject: Re: [P2PSIP] Should RELOAD impose a size restriction for storage?

Longer term, the limit would indeed be configured as part of an overlay policy, which is part of the "to do" list. I'd like implementations and protocols to be as liberal as possible, since the same code may run anywhere from a LAN with TB disks, where a gigabyte data object barely moves the bandwidth meter, to a sensor network on long-range RF links, where anything above a few hundred bytes would kill the network.

Henning

On Nov 14, 2008, at 4:04 AM, Narayanan, Vidya wrote:

> When an overlay is composed of heterogeneous nodes, it is
> unrealistic to expect all nodes to have the ability to store and
> serve all data. A node may always reject a request to store
> something. However, the robustness of a peer-to-peer network is
> affected when a portion of the nodes start running out of storage
> capacity for critical data. What qualifies as critical data is, of
> course, quite overlay specific.
>
> It is not readily possible for a storing node to tell what type of
> data it is being requested to store; however, it may be reasonable
> for overlays to provide guidance on the upper bound on the size of
> a single data item.
> This should be configurable for overlays and may even be advertised
> as part of the overlay configuration. This allows participating
> nodes to have heterogeneous capabilities while still maintaining a
> reasonable level of robustness in the overlay. Nodes may choose to
> accept data items larger than this upper bound; however, the data
> owner must be prepared for a rejection of the store request when
> the data is larger than the bound.
>
> Thoughts?
>
> Vidya
> _______________________________________________
> P2PSIP mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/p2psip
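The per-item bound proposed above could be enforced at a storing node roughly as follows (a hypothetical sketch; the configuration field name `max_item_size` and the handler shape are assumptions for illustration, not RELOAD protocol syntax):

```python
# Hypothetical sketch of a storing node enforcing a per-overlay upper
# bound on the size of a single stored data item. The configuration
# key "max_item_size" is an assumption, not actual RELOAD configuration.

class StoreError(Exception):
    """Raised when a store request is rejected by the storing node."""

def handle_store(overlay_config, data_item):
    max_size = overlay_config.get("max_item_size")  # advertised bound, bytes
    if max_size is not None and len(data_item) > max_size:
        # A node MAY still choose to accept oversized items; this
        # sketch simply rejects, as the data owner must be prepared for.
        raise StoreError("data item exceeds overlay's advertised size bound")
    # ... actually persist the item here ...
    return "stored"

# Example: an overlay that advertises a 64 KiB per-item bound.
config = {"max_item_size": 64 * 1024}
```

Advertising the bound in the overlay configuration, as the thread suggests, lets a data owner check the limit before issuing the store request rather than discovering it via rejection.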
