Eric,
    I do think it is the IOMMU that brought the performance down, and
TSO/TSQ are not the way to cure it. I may have misunderstood the
purpose of TSO/TSQ: I thought they were meant to get rid of the extra
flush overhead of the mapping and unmapping operations done by the
Intel IOMMU. (Sorry for the last HTML mail; resending as plain text.)
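
For reference, the overhead I mean is the per-packet DMA map and unmap
in the driver's TX path. A minimal sketch of the pattern, assuming a
made-up xmit function (illustrative only, not the actual be2net code):

#include <linux/dma-mapping.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * Illustrative sketch only: with the Intel IOMMU enabled, every
 * dma_map_single() sets up an IOMMU translation and every
 * dma_unmap_single() tears it down, typically flushing the IOTLB.
 * That per-packet cost is the overhead I am talking about.
 */
static netdev_tx_t example_xmit(struct sk_buff *skb,
				struct net_device *netdev)
{
	struct device *dev = netdev->dev.parent;
	dma_addr_t busaddr;

	/* One IOMMU mapping per packet on transmit. */
	busaddr = dma_map_single(dev, skb->data, skb_headlen(skb),
				 DMA_TO_DEVICE);
	if (dma_mapping_error(dev, busaddr)) {
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}

	/* ... post busaddr to the hardware TX ring here ... */

	/*
	 * One IOMMU unmap per packet (a real driver does this at TX
	 * completion time; it is inline here to keep the sketch short).
	 */
	dma_unmap_single(dev, busaddr, skb_headlen(skb), DMA_TO_DEVICE);
	dev_kfree_skb_any(skb);

	return NETDEV_TX_OK;
}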

Ethan

On Tue, Nov 12, 2013 at 9:34 AM, Eric Dumazet <eric.duma...@gmail.com> wrote:
> On Tue, 2013-11-12 at 09:03 +0800, Ethan Zhao wrote:
>> Eric,
>>     We have tested the performance with the TSO and TSQ patches
>> merged; the results are not good, even worse than the kernel without
>> those two patches. Any ideas?
>>
>> kernel   : 3.11.x with TSO & TSQ merged.  ( CONFIG_INTEL_IOMMU_DEFAULT_ON=y )
>> Network Interface : eth4
>> Network driver    : be2net
>>
>> Average Bandwidth for :
>>    1.tcp-unidirectional test    : 4385 Mbits/sec
>>    2.tcp-unidirectional-parallel: 9383 Mbits/sec
>>    3.tcp-bidirectional test     : 2755 Mbits/sec
>>
>> vs
>>
>> kernel   :  3.11.x without TSO & TSQ patches.
>> (CONFIG_INTEL_IOMMU_DEFAULT_ON is not set)
>> Network Interface : eth4
>> Network driver    : be2net
>>
>> Average Bandwidth for :
>>    1.tcp-unidirectional test    : 7992 Mbits/sec
>>    2.tcp-unidirectional-parallel: 9403 Mbits/sec
>>    3.tcp-bidirectional test     : 5802 Mbits/sec
>>
>
> So it seems it's not the TSO/TSQ changes, but
> CONFIG_INTEL_IOMMU_DEFAULT_ON being on instead of off.
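>
> One way to double check (just a suggestion, not something measured
> here): boot the same CONFIG_INTEL_IOMMU_DEFAULT_ON=y kernel with the
> IOMMU disabled on the kernel command line,
>
>     intel_iommu=off
>
> and rerun the three tests, so the only variable left is the IOMMU,
> not the TSO/TSQ patches.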