Michael,

Thanks!
> No. An entire IP packet (after defragmentation) is given to the TCP
> stack. The stack then looks at the header to find out which 'socket' to
> put the payload into, and then pushes the payload onto the end of the
> socket buffer.

Doesn't the IP stack peel off its own IP header before giving the IP
packet's payload to the TCP stack? Otherwise the TCP stack would have to
know about IP details, which violates encapsulation, right? In short,
every layer of the network stack peels off *its own* header as the packet
travels up toward the application layer, IIRC.

> Once this data gets to the
> TCP stack, the TCP stack only cares about everything after the header in
> the buffer. It doesn't care how big it is.

I'm still not clear on why the TCP stack doesn't need to care how big
each TCP segment is. Sure, ultimately it only cares about joining all the
segments in the correct order, but how can it do *that* without knowing
where each segment begins and ends?!

> > it still appears that the UDP length
> > field is redundant and unnecessary.
>
> No, that would assume that UDP has to run over IP, which it doesn't

What about TCP's and UDP's use of a "pseudo IP header" when calculating
their checksums? Those pseudo IP headers contain the sender's and
receiver's IP addresses! The TCP and UDP checksum fields therefore seem
to lock TCP and UDP into using only IP!? (Very Microsoftish :)

Chris
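
P.S. Here is my best guess at how the receive path hangs together, as a
rough C sketch (hypothetical names, no real kernel code, and it ignores
IPv4 options validation, fragmentation, and error handling); please
correct me if I have it wrong. The idea I'm testing is that each layer
learns its payload length from the header it strips, so TCP never needs
its own length field, and the pseudo_hdr struct is the one place where
the IP addresses leak back into the transport layer.

/* Rough sketch only -- hypothetical names, not real kernel code. */

#include <stdint.h>
#include <stddef.h>
#include <arpa/inet.h>          /* ntohs() */

struct ipv4_hdr {               /* 20 bytes without options */
    uint8_t  ver_ihl;           /* version (high nibble) + header len in 32-bit words */
    uint8_t  tos;
    uint16_t total_length;      /* IP header + everything after it, in bytes */
    uint16_t id;
    uint16_t frag_off;
    uint8_t  ttl;
    uint8_t  protocol;          /* 6 = TCP, 17 = UDP */
    uint16_t hdr_checksum;
    uint32_t saddr;
    uint32_t daddr;
};

struct tcp_hdr {                /* 20 bytes without options */
    uint16_t sport, dport;
    uint32_t seq, ack;
    uint8_t  data_offset;       /* high nibble: TCP header len in 32-bit words */
    uint8_t  flags;
    uint16_t window;
    uint16_t checksum;
    uint16_t urg_ptr;
};

/* The 12-byte "pseudo header" that TCP and UDP mix into their checksum.
 * This is the piece that repeats the IP source/destination addresses. */
struct pseudo_hdr {
    uint32_t saddr;
    uint32_t daddr;
    uint8_t  zero;
    uint8_t  protocol;
    uint16_t length;            /* TCP segment length: header + data */
};

/* TCP side: it is handed the segment *and* its length by the layer below,
 * so the only length field it needs is its own header length (data_offset).
 * Everything after the TCP header up to seg_len is payload, and that
 * payload covers stream bytes [seq, seq + payload_len), which is all the
 * reassembly code needs to put segments back in order. */
static void tcp_input(const uint8_t *seg, size_t seg_len,
                      uint32_t saddr, uint32_t daddr)
{
    const struct tcp_hdr *tcp = (const struct tcp_hdr *)seg;
    size_t tcp_hlen    = (tcp->data_offset >> 4) * 4;
    size_t payload_len = seg_len - tcp_hlen;
    const uint8_t *payload = seg + tcp_hlen;

    /* saddr/daddr are still needed here, both to pick the right socket
     * (the connection is identified by the full 4-tuple) and to fill in
     * struct pseudo_hdr when verifying tcp->checksum. */
    (void)payload; (void)payload_len; (void)saddr; (void)daddr;
}

/* IP side: it strips its own header before handing the rest up.  The
 * segment length is total_length minus the IP header length, so TCP
 * never has to look at any IP field itself; it just gets the two
 * addresses passed alongside the payload. */
static void ip_deliver(const uint8_t *pkt)
{
    const struct ipv4_hdr *ip = (const struct ipv4_hdr *)pkt;
    size_t ip_hlen = (ip->ver_ihl & 0x0f) * 4;
    size_t seg_len = ntohs(ip->total_length) - ip_hlen;

    if (ip->protocol == 6)
        tcp_input(pkt + ip_hlen, seg_len, ip->saddr, ip->daddr);
}

If that picture is right, then over IP the UDP length field really could
be derived by the same subtraction, and the pseudo header is exactly the
part that ties the checksum to IP-style addresses.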
