>At 11:31 PM 2/8/01, Jeremy Dumoit wrote:
>
>> I think I'm unclear on some of the protocols here... for what purpose
>>would a protocol detect errors, but not correct them?
>
>A protocol detects errors so it can throw a bad frame out rather than pass
>it to the next layer up. Most data-link-layer protocols have a CRC that
>does error detection. The sender adds up all the bits and does some bizarre
>calculation on them. The sender places the result in the CRC field of the
>frame. The receiver runs the exact same algorithm. If its result is
>different from the CRC in the frame, the recipient throws out the frame.
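The mechanism described above, as a minimal Python sketch (zlib.crc32
stands in for the link-layer CRC, and the frame layout here is invented
purely for illustration):

    import zlib

    def send_frame(payload: bytes) -> bytes:
        # Sender computes a 32-bit CRC over the payload and appends it.
        fcs = zlib.crc32(payload)
        return payload + fcs.to_bytes(4, "big")

    def receive_frame(frame: bytes):
        # Receiver recomputes the CRC over the received payload.
        payload, fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
        if zlib.crc32(payload) != fcs:
            return None   # mismatch: throw the frame out, nothing goes up-stack
        return payload    # match: hand the payload to the next layer up

Note there is no retransmission here -- detection without correction just
means the bad frame quietly disappears, and recovery, if any, is left to
a higher layer such as TCP.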
In modern implementations, the data-link protocols have a frame
check sequence (FCS, a somewhat broader term than CRC), which is
implemented in hardware and generates a 32-bit checksum. In
contrast, IP and TCP use a simple 16-bit ones'-complement checksum
-- a weaker algorithm and a much smaller field -- so they don't
have the same error-detection (let alone correction) power.
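For comparison, the IP/TCP checksum is just a ones'-complement sum of
16-bit words folded back into 16 bits (a rough RFC 1071-style sketch;
the function name is my own):

    def internet_checksum(data: bytes) -> int:
        # 16-bit ones'-complement checksum of the kind IP and TCP carry.
        if len(data) % 2:
            data += b"\x00"                # pad odd-length input
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return ~total & 0xFFFF             # final ones' complement

Besides being half the width of the FCS, this sum has obvious blind
spots -- swap any two 16-bit words and the checksum doesn't change --
which a CRC would normally catch.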
Flashing back to the late seventies, I was in a US government
standards meeting that was working on ADCCP, the ANSI predecessor of
HDLC. One of the decisions was how long to make the checksum -- 16,
32, or 64 bits. There was a lot of interest in 16 bits rather than
32, but 32 was the consensus. It was agreed that 64 bits would
improve things a bit.
At one point in the discussion, after one of the military people had
said 32 bits was enough for Emergency Action Messages -- better known
as nuclear launch orders -- I observed that the incremental
error-detection difference between 32 and 64 bits appeared to be the
acceptable risk of accidental nuclear war. People babbled a bit and
said...well...that's not EXACTLY what we meant.
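For scale (a back-of-the-envelope Python sketch -- it assumes a corrupted
frame that defeats the code looks like a random n-bit value, which is a
simplification):

    # Odds that a random bad frame sails through an n-bit check: about 2**-n.
    for n in (16, 32, 64):
        print(f"{n}-bit check: roughly 1 undetected error in {2**n:.2e}")

So the committee was, in effect, haggling over the difference between
about 1 in 4e9 and 1 in 2e19.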
As I believe Disraeli said, there are lies, damned lies, and statistics.