Hi, I've been running some tests on new hardware we've recently acquired.
As a baseline, I ran the Hadoop sort[1] with 10GB and 100GB of data on 4 systems (1 configured as master+slave, 3 as slaves) with the standard MTU of 1500; as an experiment, I then repeated the same runs with an MTU of 9000.
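(For anyone reproducing this: changing and checking the MTU amounts to something like the rough sketch below - it assumes Linux and a NIC named eth0, so adjust for your own interface names. The change itself is made with iproute2; the snippet just reads the current value back from sysfs.)

    #!/usr/bin/env python
    # Rough sketch: read the current MTU of a NIC on Linux via sysfs.
    # Assumes an interface named eth0 - adjust for your hardware.
    # The MTU itself is changed with e.g. `ip link set eth0 mtu 9000` (as root).

    IFACE = "eth0"

    with open("/sys/class/net/%s/mtu" % IFACE) as f:
        print("%s MTU: %s" % (IFACE, f.read().strip()))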
I was somewhat surprised by the results of enabling jumbo frames - they caused a slowdown across the board: about 5% for the write operations, around 6% for the 10GB sort, and nearly 20% for the 100GB sort.
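What makes this doubly surprising is that the raw framing arithmetic says jumbo frames should help: with textbook Ethernet and TCP/IPv4 overheads, wire efficiency goes up by roughly 4% at MTU 9000. A quick back-of-the-envelope calculation (header, preamble and inter-frame gap sizes assumed to be the standard values):

    # Wire efficiency of a full-size TCP segment at each MTU.
    # Assumes IPv4 (20B) + TCP (20B) headers, Ethernet header (14B) + FCS (4B),
    # plus 8B preamble and 12B inter-frame gap per frame on the wire.
    ETH_OVERHEAD = 14 + 4 + 8 + 12   # 38 bytes per frame
    IP_TCP = 20 + 20

    for mtu in (1500, 9000):
        payload = mtu - IP_TCP       # usable TCP payload (the MSS)
        wire = mtu + ETH_OVERHEAD    # bytes actually occupying the wire
        print("MTU %5d: %4d/%4d = %.1f%% efficient"
              % (mtu, payload, wire, 100.0 * payload / wire))
    # -> MTU  1500: 1460/1538 = 94.9% efficient
    # -> MTU  9000: 8960/9038 = 99.1% efficient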
Has anyone else done any testing of Hadoop with jumbo frames? If so, have you seen similar results, or is this a characteristic of my systems/network? Is there an obvious reason why a larger MTU would result in a slowdown in Hadoop?
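In case it's useful to anyone looking at the same question, a quick way to confirm that jumbo frames actually pass between two hosts is to send a don't-fragment datagram larger than 1500 bytes. The rough sketch below does this from Python on Linux - HOST/PORT are placeholders for whichever node you want to test against, and `ping -M do -s 8972 <host>` does much the same thing from the shell.

    #!/usr/bin/env python
    # Rough sketch: try to send a jumbo-sized UDP datagram with the
    # don't-fragment bit set. Linux-only (IP_MTU_DISCOVER). HOST/PORT
    # are placeholders - point them at another node on the same network.
    import socket

    HOST, PORT = "192.168.1.2", 9    # hypothetical peer; port 9 = discard
    SIZE = 9000 - 28                 # 9000-byte MTU minus IPv4 (20B) + UDP (8B)

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                 socket.IP_PMTUDISC_DO)
    try:
        s.sendto(b"\x00" * SIZE, (HOST, PORT))
        print("sent %d-byte DF datagram - local MTU accepts jumbo frames" % SIZE)
    except OSError as e:
        print("send failed (%s) - MTU along the path is too small" % e)

Note this only proves the sending side; to rule out a switch silently dropping jumbo frames you'd want to confirm receipt on the peer (ping's echo reply does that for you).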
Thanks for your thoughts,

-stephen

[1] http://wiki.apache.org/hadoop/Sort

--
Stephen Mulcahy, DI2, Digital Enterprise Research Institute,
NUI Galway, IDA Business Park, Lower Dangan, Galway, Ireland
http://di2.deri.ie  http://webstar.deri.ie  http://sindice.com
