Hi,

Ted Unangst wrote:
> ...
> good luck communicating with other tcp devices after you change your
> checksum to md5. the point is to be fast and catch some errors. also,
> type end-to-end into google.

Thanks for the interesting paper. I now understand why it makes sense to use a checksum at the link layer that catches only "most" errors: not every application requires full protection against random errors. I also understand that error detection and correction are always a performance tradeoff, one that also depends on the reliability requirements and the latency of the connection.

As you know, TCP has been adapted to changing requirements in the past via TCP options, which also provide a fallback mechanism. RFC 1146 defines alternate TCP checksum options (I don't know how good they are), but I've found no clues about actual implementations of them. Please tell me, did I just search in the wrong places?
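For reference, here's a minimal sketch of the checksum that RFC 1146 offers alternatives to: the standard Internet checksum of RFC 1071, a 16-bit ones' complement sum over the data. The function name and interface below are mine, not taken from any particular stack:

/*
 * Minimal sketch of the Internet checksum (RFC 1071): sum the data
 * as 16-bit words in ones' complement arithmetic, then complement.
 */
#include <stddef.h>
#include <stdint.h>

uint16_t
in_cksum(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;

	/* Sum the data as 16-bit big-endian words. */
	while (len > 1) {
		sum += (uint32_t)buf[0] << 8 | buf[1];
		buf += 2;
		len -= 2;
	}
	if (len == 1)
		sum += (uint32_t)buf[0] << 8;	/* pad odd trailing byte */

	/* Fold the carry bits back into the low 16 bits. */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);

	return (uint16_t)~sum;
}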

> > 2) no - so why not skip TCP checksum calculation at all? (at least for
> > incoming segments this wouldn't break a thing besides the RFC itself).
>
> because then you don't detect errors.

That's exactly my point. My basic assumption was that the TCP checksum doesn't provide enough protection against random errors. By googling for 'crc tcp checksum disagree', I found a paper that seems to confirm this.
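To make that concrete, here's a toy demonstration of my own (not taken from the paper): since the ones' complement sum is commutative, swapping two aligned 16-bit words changes the data but not the checksum, so that whole class of corruption passes TCP's check. It reuses the in_cksum() sketch from above:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

uint16_t in_cksum(const uint8_t *, size_t);	/* sketch from above */

int
main(void)
{
	uint8_t orig[8] = { 0xde, 0xad, 0xbe, 0xef, 0x12, 0x34, 0x56, 0x78 };
	uint8_t bad[8];

	/* Corrupt a copy by swapping the first two 16-bit words. */
	memcpy(bad, orig, sizeof(bad));
	memcpy(bad, orig + 2, 2);
	memcpy(bad + 2, orig, 2);

	/* Both lines print the same value: the swap is invisible. */
	printf("original:  0x%04x\n", in_cksum(orig, sizeof(orig)));
	printf("corrupted: 0x%04x\n", in_cksum(bad, sizeof(bad)));
	return 0;
}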

The tcp(4) man page says "The TCP protocol provides a reliable, flow-controlled, two-way transmission of data." It doesn't say "The TCP protocol provides a reliable, ..., only if shit doesn't happen".

Much better error-detection algorithms are known, and PC performance (and also Internet traffic) has increased a lot since the introduction of TCP. So do you think the original checksum algorithm is still the best choice in terms of the reliability/performance tradeoff?
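For comparison, here's a bit-at-a-time sketch of CRC-32 with the Ethernet polynomial (a table-driven version would be used in practice; this is only to illustrate the cost). Because a CRC is position-sensitive, it would catch the word swap above, but it needs eight dependent shift/xor steps per input byte where the Internet checksum needs one addition per two bytes; that difference is the performance side of the tradeoff I'm asking about:

#include <stddef.h>
#include <stdint.h>

/* Bit-at-a-time CRC-32 (IEEE 802.3 / Ethernet polynomial, reflected). */
uint32_t
crc32(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xffffffff;
	int i;

	while (len--) {
		crc ^= *buf++;
		for (i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0xedb88320 & -(crc & 1));
	}
	return ~crc;
}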

regards,
Andreas
