I think there are two aspects to this policy.

First, the existing usages aren't intended to store large amounts of
data.  The SIP usage stores a dictionary of AoR->URI mappings; it
currently doesn't specify a limit on size or allow the configuration
to specify one, and I don't know whether that would be useful.
Similarly, the certificate and TURN server usages store very small
amounts of data.  So I don't see size as a big problem with the
current usages, assuming the CA hasn't been compromised in some way
that would facilitate a DoS attack.

But you're right that as soon as we add a usage for voicemail, etc.,
we run into this issue.  I think the big question there is what
support needs to be in the base protocol to allow usages/specific
overlays to specify size limits, and what needs to be in the usages
themselves.  One possibility is an extension that specifies handling
for large objects, which any usage requiring large objects would then
mandate.  That would keep the base spec simple while specifying a
single way to store large objects.
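
One way such an extension might work - and this is just a sketch of
one possible design, with all names invented - is to split a large
object into chunks that each fit under the overlay's per-item limit
and store a small index object listing the chunk keys:

```python
# Hypothetical large-object scheme: chunk the data, key each chunk
# by its hash, and keep a small index of chunk keys.  Nothing like
# this is in the base spec; CHUNK_SIZE is an assumed overlay limit.
import hashlib

CHUNK_SIZE = 256  # assumed per-item limit for this overlay, in bytes

def chunk_object(data: bytes):
    """Return (index, chunks): index lists the hash key of each chunk in order."""
    chunks = {}
    index = []
    for i in range(0, len(data), CHUNK_SIZE):
        piece = data[i:i + CHUNK_SIZE]
        key = hashlib.sha1(piece).hexdigest()
        chunks[key] = piece
        index.append(key)
    return index, chunks

def reassemble(index, chunks) -> bytes:
    """Fetch chunks in index order and concatenate them."""
    return b"".join(chunks[k] for k in index)

blob = b"x" * 1000
index, chunks = chunk_object(blob)
assert all(len(c) <= CHUNK_SIZE for c in chunks.values())
assert reassemble(index, chunks) == blob
```

The index itself stays small, so only the extension - not every
usage - has to reason about oversized items.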

To Henning's point, I think there are some really interesting
questions about which parts of this need to be specified as policy on
a per-overlay basis and which need to be in the specs.  For example,
if the protocol intends to support an overlay where peers may exist
that are unable to store all of the resources for which they are
responsible, then the protocol needs to specify how to handle that
(an indirect pointer or something similar), even if other overlays
don't need it.
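
For the indirect-pointer case, the idea is roughly this (a toy
sketch; the record format and field names are invented, not from any
spec): a responsible peer that can't hold the value stores a small
redirect record naming a peer that can.

```python
# Toy model of indirect pointers.  A peer with insufficient capacity
# stores a ("POINTER", other_peer) record instead of the data, and
# lookups follow the indirection.  All names are illustrative.

class Peer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}

    def put(self, key: str, value: bytes, fallback: "Peer | None" = None):
        if len(value) <= self.capacity:
            self.store[key] = ("DATA", value)
        elif fallback is not None:
            fallback.store[key] = ("DATA", value)
            self.store[key] = ("POINTER", fallback)  # small redirect record
        else:
            raise OverflowError("no peer available to hold this value")

    def get(self, key: str) -> bytes:
        tag, payload = self.store[key]
        if tag == "POINTER":
            return payload.get(key)  # follow the indirection
        return payload

small = Peer(capacity=64)
big = Peer(capacity=10_000)
small.put("voicemail:alice", b"m" * 5000, fallback=big)
assert small.get("voicemail:alice") == b"m" * 5000
```

The point is that the responsible peer still answers for the key, so
routing is unchanged; only the storage location moves.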

Bruce


On Fri, Nov 14, 2008 at 1:37 PM, Henning Schulzrinne
<[EMAIL PROTECTED]> wrote:
> Longer term, the limit would indeed be configured as part of an overlay
> policy, which is part of the "to do" list. I'd like implementations and
> protocols to be as liberal as possible, since the same code may run
> anywhere from a LAN with TB disks, where a gigabyte data object barely
> moves the bandwidth meter, to a sensor network on long-range RF links,
> where anything above a few hundred bytes would kill the network.
>
> Henning
>
> On Nov 14, 2008, at 4:04 AM, Narayanan, Vidya wrote:
>
>>
>> When an overlay is composed of heterogeneous nodes, it is unrealistic
>> to expect all nodes to have the ability to store and serve all data.
>> Now, a node may always reject a request to store something.  However,
>> the robustness of a peer-to-peer network is affected when a portion of
>> the nodes start running out of storage capacity for critical data.
>> What qualifies as critical data is, of course, quite overlay-specific.
>>
>> It is not readily possible for a storing node to tell what type of
>> data it is being requested to store - however, it may be reasonable
>> for overlays to provide guidance on the upper bound on the size of a
>> single data item.  This should be configurable per overlay and may
>> even be advertised as part of the overlay configuration.  This allows
>> participating nodes to have heterogeneous capabilities while still
>> maintaining a reasonable level of robustness in the overlay.  Nodes
>> may choose to accept data items larger than this upper bound -
>> however, the data owner must be prepared for rejection of the store
>> request when the data is larger than the bound.
>>
>> Thoughts?
>>
>> Vidya
>> _______________________________________________
>> P2PSIP mailing list
>> [EMAIL PROTECTED]
>> https://www.ietf.org/mailman/listinfo/p2psip
>>
>