> -----Original Message-----
> From: Jagdish Motwani
> Sent: Monday, July 15, 2013 1:43 AM
> Subject: Re: [E1000-devel] igb: cannot receive packets bigger than mtu
>
> On 07/13/2013 09:17 PM, Ben Greear wrote:
> > On 07/13/2013 01:29 AM, jagdish.motw...@elitecore.com wrote:
> >> Yes John,
> >> When I do the ping request, I can see the 2 IP fragments. It's
> >> the ping reply that is dropped by my igb interface.
> >
> > Is the reply also 2 packets? If not, the peer machine may have the
> > wrong MTU.
> >
> > Thanks,
> > Ben
>
> You are right, Ben. The peer machine has MTU=1500, greater than my
> MTU=1000, so the reply is a single packet (which is bigger than my MTU).
>
> My problem is: by setting an MTU of 1000, my igb device drops all
> received packets with a length of more than 1000. However, e1000e
> allows me to receive such packets.
This is otherwise known as a ping of death: sending a ping with a packet bigger than the MTU. Older drivers and/or chips may allocate packet buffers based on the MTU, and when the device DMA-copied an oversized packet into memory, it would overwrite the end of the packet buffer.

That said, there are two ways to avoid this problem. One is to make the packet buffer bigger so that the packet won't overwrite past the buffer (either by allocating more memory or by chaining buffers). The other is to make the device stop the DMA once it reaches the size of the packet buffer. If the second approach is taken, memory only holds an MTU's worth of data in the packet buffer. The driver may then discard the packet and count it as an overrun, or it may pass the data up the protocol stack despite the larger MTU.

igb appears to use buffer chaining, but records the original size of the packet in the buffer. I suspect this is a quirk of the device: it doesn't expect to have to chain buffers at less than the standard Ethernet MTU, but when it is given buffers smaller than 1500 bytes, it chains them anyway.

When the device has an MTU of under 1K bytes, only 1K packet buffers are allocated. Your test sizes of 1000 and 1200 span that 1K boundary. If you had set the MTU to 500, then a 510-byte packet would likely be received while a 520-byte one would not (ignoring the slop for packet headers, etc.).

The fix was to clamp the packet to the packet buffer size. A different fix would have been to never allocate receive buffers smaller than 2048 bytes, even when the MTU is less than 1K bytes. This would have wasted some memory for smaller packets, but that's an unusual case. To do this, these lines would need to be deleted from igb_change_mtu(), reverting the original change:

    if (max_frame <= IGB_RXBUFFER_256)
        adapter->rx_buffer_len = IGB_RXBUFFER_256;
    else if (max_frame <= IGB_RXBUFFER_512)
        adapter->rx_buffer_len = IGB_RXBUFFER_512;
    else if (max_frame <= IGB_RXBUFFER_1024)
        adapter->rx_buffer_len = IGB_RXBUFFER_1024;

(and the "else" removed from the next line, which selects the 2K buffers)

After all, when buffer chaining isn't working quite right, one can either disable buffer chaining and use bigger buffers, or limit input packets to less than the buffer size. The original patch did the latter (rough sketches of both options appear in the postscripts below).

In any event, the best you should get from either of the pings is an ICMP error reporting that the packet was too large, so that the originator's path MTU discovery works. If the device just throws the long packet away, path MTU discovery won't work. At the end of the day, when sending a ping longer than the MTU, that ping will need to be fragmented.

Regards,
John Haller
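
P.S. To make the clamping option concrete, here is a minimal sketch of the idea, not the actual igb patch; the helper name and its arguments are made up for illustration, and only the length math is shown:

    /* Hypothetical helper (not the actual igb patch): clamp the frame
     * length the NIC wrote back in the receive descriptor (desc_len)
     * to the size of the buffer the driver actually posted
     * (rx_buffer_len, one of 256/512/1024/2048 bytes in old igb), so
     * a chained or oversized frame is never indexed past its buffer.
     */
    static inline unsigned int rx_clamp_len(unsigned int desc_len,
                                            unsigned int rx_buffer_len)
    {
            return desc_len > rx_buffer_len ? rx_buffer_len : desc_len;
    }

A driver using something like this would compare the two lengths and, on a mismatch, recycle the buffer and bump an error counter instead of building an skb from a truncated frame.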
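
P.P.S. And the alternative fix, sketched against the shape of the old igb_change_mtu() after the deletion above. The helper name is invented, the real function's locking and reset logic is omitted, and the exact max_frame arithmetic (VLAN headers, etc.) is assumed rather than quoted; ETH_HLEN, ETH_FCS_LEN, IGB_RXBUFFER_2048, and adapter->rx_buffer_len come from <linux/if_ether.h> and the driver's igb.h:

    /* Sketch of the alternative: enforce a 2K floor on the receive
     * buffer size so an over-MTU frame the device won't chain cleanly
     * still lands inside a single buffer. Wastes some memory at small
     * MTUs, as noted above.
     */
    static void igb_pick_rx_buffer_len(struct igb_adapter *adapter,
                                       int new_mtu)
    {
            int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;

            if (max_frame <= IGB_RXBUFFER_2048)
                    adapter->rx_buffer_len = IGB_RXBUFFER_2048;
            else
                    adapter->rx_buffer_len = max_frame; /* jumbo frames */
    }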