Moore, Joe writes:
> James Carlson wrote:
> > That's true, except that this isn't so simple.  The "optimal" MTU to
> > use also has to take into account the attached hardware and the peers.
> > At least for Ethernet, the MTU must be set the same on all systems on
> > a given subnetwork.  That's inescapable.
>
> You're saying that the MTU must be the same for everything on the
> subnet?  I'm not a network guru, but it seems to me that there's some
> wiggle room in there.
Not for IEEE 802.

> All listeners on the network must accept a packet up to the MTU of the
> system sending them data.  They don't care about anything else happening
> on the network.

That's true.  You can usually configure your own IP-layer MTU (or, better
yet, a transport-layer segment size) downwards and send smaller packets if
that's what you want to do.

Setting the MAC-layer MTU differently is hazardous in several cases.  If
you do any bridging, it's obviously a non-starter; bridge links must have
an MTU at least as large as the largest packet you'll ever see, or you'll
have black holes in your network.  If you have interfaces that treat MTU
as though it were MRU as well (an apparently common situation), then
setting the MTU smaller on the physical (MAC-layer) interface will produce
exactly the sort of broken behavior that you were excluding.  Thus, it's
something to be careful about, and it depends on internal (and usually
undocumented) device driver design issues.  If you use routing protocols
such as IS-IS that depend on the MAC's MTU, you may end up with surprising
results as well.

> If there is a driver- and hardware-optimized "Max Transmit TU" and the
> subnetwork has a different (but bigger) MTU, wouldn't it make sense to
> split out those two tunables?  Default MTTU == MTU, but it could be
> tweaked at the driver layer (for example, in driver.conf).
>
> Or would that be too many network tunables?

I think it would be too many.  And worse, it's just too vague.  Optimal in
what sense, and for whom?  Is it really true that "all" applications
benefit from using exactly that size, or do only "certain" applications
benefit, and if so, which ones?  Does it matter what is "optimal" for the
peer you're talking to, or are local DMA optimizations the only things
that matter in the world?  Does "optimal" perhaps depend on other factors,
such as the use (or non-use) of IP options, v4 versus v6, or other
offload-hampering issues?
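To make the MTU-as-MRU hazard concrete, here's a toy Python sketch (all
names hypothetical; this is an illustration, not real driver code).  It
models a shared subnet where one interface's receive limit tracks its
configured MTU: lowering that one host's MAC-layer MTU makes every
full-sized frame from its peers vanish silently.

```python
def deliver(frame_len: int, receiver_mru: int) -> bool:
    """An interface that treats its MTU as its MRU silently drops
    any frame larger than that limit -- no error is reported."""
    return frame_len <= receiver_mru


def send_burst(sender_mtu: int, receiver_mru: int, payload: int) -> int:
    """Split `payload` bytes into frames of at most `sender_mtu` bytes
    and count how many frames the receiver actually accepts."""
    delivered = 0
    while payload > 0:
        frame = min(payload, sender_mtu)
        if deliver(frame, receiver_mru):
            delivered += 1
        payload -= frame
    return delivered


# Everyone at 1500: all three full-sized frames arrive.
print(send_burst(sender_mtu=1500, receiver_mru=1500, payload=4500))  # 3

# One host drops its MAC-layer MTU (and thus its MRU) to 1400: every
# full-sized 1500-byte frame from a peer is discarded -- a black hole.
print(send_burst(sender_mtu=1500, receiver_mru=1400, payload=4500))  # 0
```

Note that the failure is invisible to the sender; nothing at the link
layer reports the drop, which is exactly why this class of mismatch is
so hard to diagnose.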
One possibility is having the driver export properties that various
applications and/or transport layers can read and determine what they'll
do to optimize their behavior.  That way, it's not wired into something as
side-effect-laden and hard to get right as MTU, and it's presented in a
way that allows us to do the Right Thing over time (which I think is to
adopt the adaptive behavior that Nico described).

-- 
James Carlson, Solaris Networking              <[email protected]>
Sun Microsystems / 35 Network Drive        71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N Fax +1 781 442 1677

_______________________________________________
networking-discuss mailing list
[email protected]
