In Henning Brauer's recent talk at BSDCan, one of the things he alluded to was the bugginess of some TCP offload engines, and he also mentioned in passing that with reasonably fast CPUs you may as well turn off all offloading. I've since found quite a lot of anecdotal evidence that disabling offloading altogether actually speeds things up on high-end servers... but very little in the way of data or guidelines.
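For anyone who wants to try the experiment, this is roughly what toggling the offload features looks like on FreeBSD with ifconfig (a sketch only; `em0` is just an example interface name, so substitute your own):

```shell
# Show which offload capabilities the driver advertises and which
# are currently enabled (look at the "options=" line).
ifconfig em0

# Turn off the usual suspects: transmit/receive checksum offload,
# TCP segmentation offload, and large receive offload.
ifconfig em0 -txcsum -rxcsum -tso -lro

# Re-enable them later if disabling turns out not to help.
ifconfig em0 txcsum rxcsum tso lro
```

Note these settings don't survive a reboot unless you put them in rc.conf (or, on pfSense, use the corresponding checkboxes under System > Advanced).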
I wonder, does anyone know of any heuristics to help judge (without doing extensive benchmarking, since I don't happen to have any Ixia gear handy...) when we're just as well off disabling all the offload functions? I wonder particularly about the code path that fixes up tunnel packets so the igb/em/bce/bge drivers can do their apparently-very-special checksum offload.

This smells to me like hardware RAID vs. software RAID, where which one is faster (not necessarily *better*) changes depending on how fast the host CPU is and how old the RAID card is... and it was possible, at times, to come up with fairly accurate rules of thumb for judging when it would be faster not to offload the calculations.

The other relevant question is whether FreeBSD does anything more exciting than checksum offloading. My recollection of Broadcom's version of TCP Chimney Offload under Windows 2008 is that it included a bit more than just checksums, but I don't recall the details.

Thoughts?

-Adam Thompson
[email protected]

_______________________________________________
List mailing list
[email protected]
http://lists.pfsense.org/mailman/listinfo/list
