[LES:] Let’s use a very simple example.

A and B are neighbors
For LSPs originated by Node C, here is the current state of the LSPDB:

A has (C.00-00 (Seq 10), C.00-01 (Seq 8), C.00-02 (Seq 7)) Merkle hash: 0xABCD
B has (C.00-00 (Seq 10), C.00-01 (Seq 9), C.00-02 (Seq 6)) Merkle hash: 0xABCD
(It is unlikely that the hashes match - but possible.)

When A and B exchange hash TLVs, they will conclude that they have the same set 
of LSPs originated by C even though they don't.
They would then clear any SRM bits currently set to send updated LSPs received 
from C on the interface connecting A and B.
We have just broken the reliability of the update process.
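A minimal sketch of the failure mode above. This is not the draft's actual encoding - the hash function, digest width, and tuple layout are illustrative - but it shows how a single digest over a range of fragments can mask a real difference, and how easily that happens once the digest is small:

```python
import hashlib

# Hypothetical sketch (NOT the draft's on-wire encoding): collapse a node's
# view of C's fragments, as (lsp_id, seq) tuples, into one short digest.
def range_hash(entries, bits=16):
    """Hash a set of (lsp_id, seq) tuples down to a `bits`-bit digest."""
    h = hashlib.sha256()
    for lsp_id, seq in sorted(entries):
        h.update(f"{lsp_id}:{seq};".encode())
    return int.from_bytes(h.digest(), "big") % (1 << bits)

a_view = [("C.00-00", 10), ("C.00-01", 8), ("C.00-02", 7)]
b_view = [("C.00-00", 10), ("C.00-01", 9), ("C.00-02", 6)]

# The views differ, so the digests almost certainly differ -- but there are
# only 2**bits possible digest values, so collisions are possible, and a
# collision makes A and B wrongly conclude their LSPDBs agree.
# At a deliberately tiny width, a colliding view is easy to find by brute force:
tiny = range_hash(a_view, bits=8)
collision = None
for s in range(1, 100000):
    view = [("C.00-00", 10), ("C.00-01", s), ("C.00-02", 7)]
    if s != 8 and range_hash(view, bits=8) == tiny:
        collision = view
        break
print(collision)  # a *different* LSPDB view with the same 8-bit digest
```

The brute-force step only shows the principle; with a realistically sized digest a collision is rare - but, as noted above, rare is not never, and when it happens the update process silently stops flooding the missing LSPs.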

The analogy to the use of the Fletcher checksum on PDU contents is not a good 
one. The checksum allows a receiver to determine whether any bit errors 
occurred in transmission. If a bit error occurs and is undetected by the 
checksum, that is bad - but it just means that a few bits in the data are 
wrong, not that we are missing the entire LSP.

I appreciate there is no magic here – but I think we can easily agree that 
improving scalability at the expense of reliability is not a tradeoff we can 
accept.

Well, we already have this problem today, as I described: the more content the 
hash/checksum covers, the more likely it becomes, of course, that hashes 
collide. The only way to do better here is to distribute bigger, or more, 
hashes/checksums. And shifted XORs are actually some of the best "entropy 
generators", based on work done on MAC hashes for SPT, AFAIR.
[LES2:] We don’t have the same problem today.
An SNP entry (as you documented) has: (LSP ID.Fragment + Seq# + CSUM + Lifetime)
If I have: A.00-00 (Seq #10) Chksum: 0xABC
You have: A.00-00 (Seq #11) Chksum: 0xABC

The two SNP entries are different, and the update process will guarantee that 
Seq #11 gets flooded reliably.
But when we introduce a range, we no longer have complete info about each LSP 
in the range, so a duplicate hash can compromise flooding reliability.
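The per-LSP comparison that an SNP entry permits can be sketched as follows (field names are illustrative, not the on-wire SNP encoding):

```python
from collections import namedtuple

# Illustrative stand-in for one SNP entry; not the actual TLV layout.
SnpEntry = namedtuple("SnpEntry", "lsp_id seq checksum lifetime")

mine  = SnpEntry("A.00-00", 10, 0xABC, 1200)
yours = SnpEntry("A.00-00", 11, 0xABC, 1200)

# Even though the checksums happen to match, the sequence numbers disagree,
# so the update process knows Seq #11 must be flooded. Because the comparison
# is per LSP, no hash collision can hide the difference.
needs_flood = yours.seq > mine.seq
print(needs_flood)  # True
```

A range hash removes exactly this per-LSP visibility: once the two entries are folded into one digest, a collision makes them indistinguishable.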

Make sense??

   Les

_______________________________________________
Lsr mailing list -- [email protected]
To unsubscribe send an email to [email protected]