Hi Steven,

On 9/15/15 12:52 PM, Steven Barth wrote:
> Hello Brian,
> 
> thank you for your feedback.
> 
>> ----------------------------------------------------------------------
>> DISCUSS:
>> ----------------------------------------------------------------------
>>
>> I have no objections to the publication of this document, but I do have a
>> couple of points that I want to discuss...
>>
>> * The spec says that all TLVs are transmitted every time any value in the
>> TLV set changes. Section 1 says that a delta synchronization scheme is
>> not defined.  What is the justification for not using a delta
>> synchronization approach?  The ordering of the TLVs needed to compute the
>> hash can be done at the receiver and a delta approach would minimize
>> bandwidth consumption.  I think it would be useful to provide some
>> justification in the spec for the design decision made to not use a delta
>> synchronization approach.
> 
> Delta synchronization was mainly omitted due to our intended goal of
> focusing on optimizing for simplicity and only infrequent and potentially
> larger (in relation to the whole dataset) state changes as noted in 1.1.
> Applicability. It is therefore not in the base draft, however it should
> not be too difficult for a DNCP-based protocol that is in need of such a
> feature (e.g. due to it being "on the edge" of DNCPs applicability) to add
> it as an extension.
> 

My point above comes from this in section 1.1:

   Another consideration is the size of the published TLV set by a node
   compared to the size of deltas in the TLV set.  If the TLV set
   published by a node is very large, and has frequent small changes,
   DNCP as currently specified may be unsuitable since it does not
   define any delta synchronization scheme but always transmits the
   complete updated TLV set verbatim.

The tradeoffs here really focus on whether those "frequent small
changes" are isolated to one or two TLVs or are spread out across all
of the TLVs.  I don't think this document can make any concrete
statement about where the inflection point in this decision lies.

It seems to me that the justification for not doing delta updates is
really just simplicity of implementation.  That is a perfectly fine
reason in my opinion.  However, I think the hand-waving around the text
above and the computation of A_NC_I doesn't really provide useful
justification.
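To make the ordering point from my DISCUSS concrete: the per-node hash
can be computed over a canonically ordered TLV set at the receiver,
regardless of the order in which TLVs arrive, so a delta approach would
not break hash agreement.  A minimal illustrative sketch (the TLV
header layout, hash function, and truncation length here are
placeholders I picked for the example, not the exact DNCP profile
definitions):

```python
import hashlib
import struct

def node_state_hash(tlvs, hash_len=8):
    """Hash a node's TLV set after canonically ordering it.

    `tlvs` is an iterable of (type, value_bytes) pairs.  The ordering
    is done here, at the receiver, so the result is independent of the
    order in which TLVs were received or applied.
    """
    h = hashlib.sha256()
    # Sort by (type, value) so every node derives the same hash from
    # the same logical set, regardless of arrival order.
    for t, v in sorted(tlvs, key=lambda tv: (tv[0], tv[1])):
        h.update(struct.pack("!HH", t, len(v)))  # type + length header
        h.update(v)
    return h.digest()[:hash_len]  # profiles may truncate the digest
```

The point being that receiving TLVs one at a time (a delta) and
receiving the full set verbatim produce the same hash once the set is
ordered locally.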

> 
>> * Section 4.4 says that all responses are sent unicast, even for requests
>> received via multicast over a multi-access medium. Was consideration
>> given to use multicast responses and supporting message suppression on
>> other nodes? Or, was the design decision made to ensure that all nodes
>> responded with their TLV set to the requester?  Either approach may be
>> reasonable, but there is no justification given.
> 
> There are multiple factors involved here. One important issue is that
> securing these multicast transactions is difficult and I don't even know
> if there is a standardized and deployed way to do this e.g. using (D)TLS
> that we use for unicast. This is slightly touched in 4.2. Data Transport
> and 10. Security Considerations.

That depends on what you mean by securing them.  Is there really a need
to provide confidentiality?  It seems like the necessary security
functions are integrity and authentication, and given those, multicast
responses are quite feasible.  But that really doesn't address my
question...

There are useful benefits to multicast responses, a primary one being
that everyone who receives the multicast response can update their
state for the sending node.  On the flip side, requiring unicast
responses forces all nodes to respond and provide their TLV sets to the
requester.

The problem is that these benefits seem better suited for the profile
documents that use DNCP.  I don't see the need to pick one response
method over another in this base specification.

> 
> Another reason is that on some link types - such as Wifi - multicast
> transmissions can be disadvantageous and reducing their number can be
> beneficial in many cases.
> 

Agreed, but with infrequent TLV changes, this cost is minimized.  So, I
don't see the need to specify a fixed approach in this document.

> 
>> * When responding to a multicast request over a multi-access medium, why
>> is the randomization of the transmit time only a SHOULD?  I would think
>> that needs to be a MUST.
> 
> I think this more or less depends on the type of link and its characteristics
> and the current state, so I'm not sure if a MUST is necessary in all cases.
> 

Multicast-based implosion is a problem.  If implementations ignore the
SHOULD, you run a very real risk of overloading the request sender with
unicast responses.  If the WG is not willing to make this a MUST, the
spec should clearly spell out the potential for amplification attacks
using this protocol.

> 
> 
>> ----------------------------------------------------------------------
>> COMMENT:
>> ----------------------------------------------------------------------
>>
>> 1. I think the mention of the trickle variable 'k' in section 1 is
>> gratuitous and causes confusion.
> 
> Since the meaning does not really suffer we can simply remove
> the bracket "(ideally with k < 2)" if that makes it less confusing.
> 

Makes sense.

> 
>> 2. Why does this document say (section 7) that the hash function is
>> non-cryptographic?  Shouldn't that be determined by each profile?
> 
> The intended meaning was really "not necessarily cryptographic",
> we might just remove the word "non-cryptographic" in that section if
> that makes it more apparent. 9. DNCP Profile-Specific Definitions provides
> some guidance on the choice of functions already and indicates that it can
> be either.
> 

I think dropping "non-cryptographic" is perfectly reasonable and leaves
the decision up to the profile developers.

Regards,
Brian



_______________________________________________
homenet mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/homenet
