Hi Les,

1) The uniqueness of the calculated hash is an essential component for this to 
work. Given that you are using a simple XOR on a 64-bit number - and then 
"compressing" it to 32 bits for advertisement - uniqueness is NOT guaranteed. 
The danger of false positives (i.e., hashes that match when they should not) 
would compromise the solution. Can you provide more detail on the efficacy of 
the hash?


I’m sorry, you’re a bit confused here. We do NOT need uniqueness of the hash.
In fact, an essential property of all hashes is that they are not unique:
multiple inputs will always map to the same hash value, producing collisions.
This is necessarily true because the size of the input is larger than the size
of the output, so information is inevitably lost.

This is already true for the Fletcher checksum that is used as part of CSNPs.

What we do want is to ensure that the hashing function is sensitive to the 
inputs. That is, for a small change in the input, there is a change in the hash 
value.

Since we are not doing security here, we do NOT care about the ability to 
compute a hash collision.

That said, I don’t think that we are particularly sensitive to the specific 
hashing function. My personal preference would be to continue to use the 
Fletcher checksum just because the code is already there in all 
implementations. One could also reasonably use CRC-16, CRC-32, etc.
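
For concreteness, here is a minimal sketch (in Python, purely illustrative; the
fields covered - LSP ID, sequence number, checksum - are my assumption based on
what a CSNP entry carries, not something the draft mandates) of a range hash
built on CRC-32.  Fletcher would slot in the same way:

import struct
import zlib

def range_hash(lsp_entries):
    """Entries: (lsp_id_bytes, seq_num, checksum) tuples, sorted by LSP ID."""
    data = bytearray()
    for lsp_id, seq_num, checksum in lsp_entries:
        data += lsp_id                  # 8-byte LSP ID (SysID + pseudonode + fragment)
        data += struct.pack("!IH", seq_num, checksum)
    return zlib.crc32(bytes(data)) & 0xFFFFFFFF

# Any change to a sequence number changes the hash, which is the only property
# we are relying on - input sensitivity, not collision resistance.
a = range_hash([(b"\x00\x11\x22\x33\x44\x55\x00\x00", 10, 0x1234)])
b = range_hash([(b"\x00\x11\x22\x33\x44\x55\x00\x00", 11, 0x1234)])
assert a != b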


2) Do we need a more sophisticated hash calculation in order to guarantee 
uniqueness? If the argument is that the update process is already reliable even 
without CSNPs/HSNPs - that HSNPs are simply an optimization and don't have to 
be 100% reliable - then I think this implies that periodic CSNPs are not needed 
at all. And if the hash has a significant possibility of being non-unique, 
relying on HSNPs during adjacency bringup might actually be a hindrance, not a 
help.


Periodic CSNPs are not needed.  A periodic HSNP is sufficient, and if there are 
inconsistencies, then they will devolve into CSNPs to isolate the exact portion 
of the database that is inconsistent.  We intentionally re-use the CSNP and 
PSNP mechanisms as we saw no point in re-inventing them.
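
To make "devolve into CSNPs" concrete, the sketch below shows the shape of the
descent.  The helper names (range_hash, split_range, send_csnp) and the fanout
and leaf-size numbers are hypothetical, and each hash comparison is of course
an HSNP exchange with the neighbor on the wire, not a local call:

def isolate(local_db, peer, lo, hi, fanout=16, leaf_size=64):
    # Slice [lo, hi) of the LSPDB agrees with the neighbor: nothing to send.
    if peer.range_hash(lo, hi) == local_db.range_hash(lo, hi):
        return
    # Narrowed down far enough: let the legacy CSNP/PSNP machinery finish.
    if local_db.count(lo, hi) <= leaf_size:
        send_csnp(local_db, peer, lo, hi)
        return
    # Otherwise split the range and recurse on each sub-range.
    for sub_lo, sub_hi in split_range(lo, hi, fanout):
        isolate(local_db, peer, sub_lo, sub_hi, fanout, leaf_size)

When the top-level hashes already match, the recursion terminates immediately,
which is the synchronized case discussed below.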


3) I would like to raise the question as to whether we should prioritize a 
solution that aids initial LSPDB sync on adjacency bringup over a solution 
which works well after LSPDB synchronization (periodic CSNPs).


Our solution works well in both cases.  In the case of initial bringup, our 
mechanism exchanges a logarithmic number of packets to isolate the exact LSPs 
that are inconsistent.  In the case where the databases are already 
synchronized, only a single top-level HSNP is required.

This is also true in the case of continuing verification of synchronized 
databases.
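
As a back-of-the-envelope illustration (the LSPDB size and per-HSNP fanout
below are purely hypothetical numbers, not anything the draft specifies):

import math

N = 100_000   # hypothetical number of LSPs in the database
F = 16        # hypothetical number of (range, hash) entries per HSNP
print(math.ceil(math.log(N) / math.log(F)))   # -> 5 levels of descent

Only the ranges that actually contain a discrepancy need to be descended, so a
handful of differing LSPs costs a handful of these short paths.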


The need for periodic CSNPs arose from early attempts at flooding optimizations 
(mesh groups) where an error in the manual configuration could jeopardize the 
reliability of the Update Process. In deployments where standards-based 
flooding optimizations are used, the need for periodic CSNPs is lessened, as 
the standards-based solution should be well tested. Periodic CSNPs become the 
"suspenders" in a "belt" based deployment (or, if you prefer, the "belt" in a 
"suspenders" based deployment). I am wondering if we should de-emphasize the 
use of periodic CSNPs?  In any case, the size of a full CSNP set is a practical 
issue in scaled deployments - especially where a node has a large number of 
neighbors. Sending the full CSNP set on adjacency UP is a necessary step, and 
therefore I would like to see this use case get greater attention over the 
optional periodic CSNP case.


Since this now reduces to sending a single top-level HSNP, and I like having a 
belt and suspenders (figuratively), things are already much cheaper and I would 
favor retaining that.


4) You choose to define new PDUs - which is certainly a viable option. But I am 
wondering if you considered simply defining a new TLV to be included in 
existing xSNPs. I can imagine cases - especially in PSNP usage - where a 
mixture of existing LSP entries and new Merkle Hash entries could usefully be 
sent in a PSNP to request/ack LSPs as we do today. The use of the hash TLV in 
PSNPs could add some efficiency to LSP acknowledgments.


We chose to define new PDUs to avoid risking interoperability problems. We 
could easily see ourselves wanting to generate packets that only include HSNP 
information and no legacy CSNP/PSNP information.


5) The choice of ranges for the new TLVs depends upon the current state of the 
LSPDB on the sending node. The definitions you have seem targeted at "periodic 
CSNPs" where it is reasonable to expect that both neighbors have (nearly) the 
same LSPDB contents. However, in the case of adjacency bringup, it is likely 
that there are significant differences in the current content of the LSPDBs on 
the neighbors - which will make it far more likely that the ranges of nodes 
chosen in each hash entry will differ between the neighbors - making the 
strategy less useful for this case.


I don’t see anything ‘less useful’ about this case. If there are discrepancies, 
then they are resolved in an efficient manner. Any subsets of the database that 
are in sync are very efficiently confirmed by higher layers.


6) You do not discuss the use of HSNPs on LANs. It would seem intuitive that 
HSNPs could only be used when all neighbors on the LAN support it. But some 
discussion of LANs would be desirable.

Agreed, some discussion of LANs would be worthwhile. That said, given the 
decreasing usage of actual LAN deployments, I think that this is not a 
significant concern.

T
