Yes, we could use a 32-bit Fletcher for the fragment hash, although it
doesn't run over much data. I can fairly easily look at the performance
implications compared to, e.g., the relatively simple double hashing I've
shown above.
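
For concreteness, here's roughly the shape of a Fletcher-32 over a fragment
(untested sketch; word order, padding, and reduction strategy are
illustrative choices here, not a proposal):

#include <stddef.h>
#include <stdint.h>

/* Fletcher-32 over a fragment: two running sums, folded at the end.
 * Input is taken as little-endian 16-bit words; an odd trailing byte
 * is zero-padded. */
uint32_t fletcher32(const uint8_t *data, size_t len)
{
    uint32_t sum1 = 0xffff, sum2 = 0xffff;

    while (len > 1) {
        sum1 += (uint32_t)data[0] | ((uint32_t)data[1] << 8);
        sum2 += sum1;
        sum1 = (sum1 & 0xffff) + (sum1 >> 16);   /* partial mod-65535 */
        sum2 = (sum2 & 0xffff) + (sum2 >> 16);
        data += 2;
        len  -= 2;
    }
    if (len) {                       /* odd trailing byte, zero-padded */
        sum1 += data[0];
        sum2 += sum1;
    }
    /* final reductions so both sums fit in 16 bits */
    sum1 = (sum1 & 0xffff) + (sum1 >> 16);
    sum2 = (sum2 & 0xffff) + (sum2 >> 16);
    sum1 = (sum1 & 0xffff) + (sum1 >> 16);
    sum2 = (sum2 & 0xffff) + (sum2 >> 16);
    return (sum2 << 16) | sum1;
}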

On Wed, Oct 8, 2025 at 9:21 PM Tony Li <[email protected]> wrote:

>
> Robert,
>
> Yes, we can introduce any hashing function that we would like.  However,
> implementation complexity and standardization strongly suggest that we pick
> one.  My personal preference is to use the same Fletcher checksum that
> IS-IS already uses, as that’s the least amount of new code, but I don’t
> know if that would match everyone’s reliability criteria.
>
> Using multiple algorithms on the fly would be expensive to implement and
> darn tricky to debug. How do you deal with LANs?  However, maybe the same
> effect could be achieved by varying a seed for the hash function on a
> per-link basis.
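>
> As a sketch (hash32_init/hash32_update/hash32_final are placeholders for
> whichever incremental hash gets standardized; the point is to feed the
> seed through the hash itself, since just XORing it into the result would
> leave any collision identical on every link):
>
> uint32_t fragment_digest(uint32_t link_seed,
>                          const uint8_t *frag, size_t len)
> {
>     uint8_t seed[4] = {
>         (uint8_t)(link_seed >> 24), (uint8_t)(link_seed >> 16),
>         (uint8_t)(link_seed >> 8),  (uint8_t)link_seed
>     };
>     /* hash32_* are stand-ins for the agreed function, not real APIs */
>     uint32_t st = hash32_init();
>     st = hash32_update(st, seed, sizeof seed);   /* per-link salt */
>     st = hash32_update(st, frag, len);           /* LSP fragment  */
>     return hash32_final(st);
> }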
>
> T
>
>
> On Oct 8, 2025, at 11:20 AM, Robert Raszuk <[email protected]> wrote:
>
> Hi Tony,
>
> I was under the assumption that we are free to introduce a new hashing
> function if really needed (the fallback for the subset of mismatched LSPs
> would still be there, so I am not sure this is even needed).
>
> Maybe this fallback needs to be stated in bold in the document ...
>
> Thx,
> R.
>
> On Wed, Oct 8, 2025 at 6:47 PM Tony Li <[email protected]> wrote:
>
>>
>> Hi Robert,
>>
>> Discussions about hashing functions are certainly welcome.
>>
>>
>> And why not simply use the same hash function, just increase its size,
>> say to 256 bits?
>>
>>
>>
>> You can’t always do that.  For example, CRC-16 is defined to produce a
>> 16-bit result.  You can’t just cause it to produce 32 bits.  You can
>> shift to CRC-32, but that’s a different hashing function.
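>>
>> To make that concrete: the two are defined by different generator
>> polynomials, not one algorithm at two widths (bit-at-a-time sketches,
>> untested):
>>
>> #include <stddef.h>
>> #include <stdint.h>
>>
>> uint16_t crc16_ccitt(const uint8_t *p, size_t n)
>> {
>>     uint16_t crc = 0xffff;              /* poly 0x1021, degree 16 */
>>     while (n--) {
>>         crc ^= (uint16_t)(*p++) << 8;
>>         for (int i = 0; i < 8; i++)
>>             crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
>>                                  : (uint16_t)(crc << 1);
>>     }
>>     return crc;
>> }
>>
>> uint32_t crc32_ieee(const uint8_t *p, size_t n)
>> {
>>     uint32_t crc = 0xffffffff;       /* reflected poly 0xEDB88320 */
>>     while (n--) {
>>         crc ^= *p++;
>>         for (int i = 0; i < 8; i++)
>>             crc = (crc & 1) ? (crc >> 1) ^ 0xedb88320 : crc >> 1;
>>     }
>>     return ~crc;
>> }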
>>
>>
>> Isn't it true that the probability of collision decreases significantly
>> (or even exponentially) as the hash size grows?
>>
>>
>>
>> That’s true, assuming good hashing functions.  Not true for bad functions.
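>>
>> (Rough numbers, assuming a well-distributed hash: by the birthday bound,
>> the collision probability over N fragments is about N^2 / 2^(b+1) for a
>> b-bit hash, so for N = 10,000 fragments that is roughly 2^-6 at 32 bits
>> but 2^-38 at 64 bits; each added bit halves it.)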
>>
>>
>> Besides, as the draft says, there is still a fallback to a scoped
>> lower-level check, hence I am not sure there is any issue with the
>> proposal.
>>
>>
>>
>>
>> *And ideally, a hash mismatch should produce no more than a single packet
>> or two with lower-level checksums or CSNPs, to optimize re-convergence
>> while minimizing the number of packets exchanged.*
>>
>>
>>
>> Thanks,
>> T
>>
>>