I still maintain that for arbitrary traffic, you cannot know the
"optimal" MTU, because you don't know what the overheads are. For
protocols with a very high per-packet cost, the DMA overhead of a larger
packet might be in the noise, to the point that 9K is even better on the
nxge configuration you've proposed.
I think the right answer is to document this information in the man page,
with wording something like this:
"PERFORMANCE NOTE: For nxge devices on xxx systems (fill in the xxx),
the DMA engine will perform optimally with an MTU of 8150 bytes. For
some traffic patterns, it may be better to use this value for the MTU
rather than a larger 9K MTU. Note, however, that all hosts on a given
network segment must be configured for the same MTU, so changes must be
made with care. A full discussion of large MTU configuration issues can
be found ... <fill in a reference to the appropriate administration guide>"
I'm vehemently opposed to adding software hooks to express this
particular issue (which is really a hardware bug, IMO) to users --
especially since we cannot say that reducing the MTU will always give
better performance. It depends on too many variables that are not under
the driver's control, including the costs of the upper-layer protocols
and the various efficiencies or inefficiencies of other peers on the
network.
- Garrett
Girish Moodalbail wrote:
Folks,
While a recent PSARC case (2009/235) was being approved, a question came
up regarding the 'optimum MTU value' for a driver that supports jumbo
frames, and I would like to revive that discussion here in the
networking community.
The basic point is:
Configuring jumbo frames is very common among customers. However, for
some drivers that support a range of MTU values, the maximum MTU value
might not be the 'optimum MTU' value, because of the way the driver is
implemented. For example, on Neptune (nxge), the maximum is 9000, but
the optimal large MTU is 8150, because of the size of the DMA transfers
the card does in hardware. In short, the optimal large transfer size is
a function of the hardware and the driver.
So how do we 'publish' such values?
One school of thought was to document it in the performance tuning
guide, the manpage for that driver, blogs, whitepapers, et al. The issue
with this is that the information often gets outdated (e.g., the optimal
MTU value itself changes for a driver), and googling around for such
tunables would take a lot of time.
A different school of thought was for each driver to provide such
information through a read-only property, which would be displayed by
'dladm show-linkprop' in a new column or as part of the possible-values
list itself.
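For instance, the output might look roughly like this (the
'mtu-preferred' property name and the exact column layout are made up
purely for illustration; nothing has been designed yet):

  # dladm show-linkprop -p mtu,mtu-preferred nxge0
  LINK     PROPERTY        PERM  VALUE   DEFAULT  POSSIBLE
  nxge0    mtu             rw    1500    1500     1500-9000
  nxge0    mtu-preferred   r-    8150    8150     --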
The more basic issue with both of the approaches above is that MTU is
not only an end-host issue but also a network-wide configuration issue.
That is, the administrator must choose an MTU value based not only on
the local hardware configuration but also on the capabilities and
configuration of all the other nodes on his/her L2 network. So how much
good would publishing one preferred MTU value do, overall?
thoughts?
thanks
~Girish
_______________________________________________
networking-discuss mailing list
[email protected]