Dear All,

I have two Dell PowerEdge R740 servers, A and B, running Ubuntu 16.04. Each
one has a Mellanox MCX556A-ECAT NIC installed in a PCIe x16 slot, and the two
NICs are connected directly back to back with a copper cable.


Server A runs an RX program that calls a function to process and analyze the
received packets. Server B runs a packet-gen program that generates and sends
packets out. Both are compiled against dpdk-17.11.
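
For context, both programs use the standard burst APIs. The core loops are
roughly like the sketch below; this is simplified, and process_packet() and
build_packets() are just placeholders for my actual analysis and generator
code (port, queue, and mempool setup are omitted):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Placeholders for the real analysis and generator code. */
void process_packet(struct rte_mbuf *m);
uint16_t build_packets(struct rte_mempool *pool, struct rte_mbuf **bufs,
                       uint16_t n);

/* Server A: receive a burst and analyze each packet. */
static void rx_loop(uint16_t port)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++) {
            process_packet(bufs[i]);        /* analysis work happens here */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}

/* Server B: keep transmitting generated packets as fast as possible. */
static void tx_loop(uint16_t port, struct rte_mempool *pool)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t nb = build_packets(pool, bufs, BURST_SIZE);
        uint16_t sent = rte_eth_tx_burst(port, 0, bufs, nb);
        /* free whatever the NIC did not accept in this burst */
        for (uint16_t i = sent; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);
    }
}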


Now the interesting thing is that server B automatically adjusts its TX
throughput according to how fast server A processes the packets: if server A
processes the packets faster, server B sends at a higher throughput; if
server A processes the packets slower, server B sends at a lower throughput.
Please note that once the program on server B starts, it is never interrupted
in any way.
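
To make "TX throughput" concrete: one simple way to observe the rate on
server B is to poll the ethdev counters roughly once per second, along these
lines (just an illustration of the measurement, not my exact code):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_cycles.h>

/* Print the TX packet and bit rate of a port roughly once per second,
 * computed from the ethdev statistics counters. */
static void report_tx_rate(uint16_t port)
{
    struct rte_eth_stats prev, cur;
    const uint64_t hz = rte_get_timer_hz();

    rte_eth_stats_get(port, &prev);
    for (;;) {
        uint64_t start = rte_get_timer_cycles();
        while (rte_get_timer_cycles() - start < hz)
            ;                               /* busy-wait ~1 second */
        rte_eth_stats_get(port, &cur);
        printf("TX: %" PRIu64 " pps, %" PRIu64 " bps\n",
               cur.opackets - prev.opackets,
               (cur.obytes - prev.obytes) * 8);
        prev = cur;
    }
}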


However, I would expect server B to send packets at a constant throughput no
matter how fast server A processes them.
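
My own guess is that this could be link-level (pause-frame) flow control
between the two NICs, but I am not sure that is really the mechanism here.
If it is, I assume something like the following would turn it off on each
port (or, since mlx5 keeps the kernel netdev, "ethtool -A <iface> rx off
tx off" on the corresponding interface), but I have not confirmed that the
mlx5 PMD honors this:

#include <string.h>
#include <errno.h>
#include <rte_ethdev.h>

/* Try to disable link-level flow control (pause frames) on a port.
 * This assumes the throttling is caused by pause frames, which is
 * only a guess on my part. */
static int disable_flow_ctrl(uint16_t port)
{
    struct rte_eth_fc_conf fc_conf;
    int ret;

    memset(&fc_conf, 0, sizeof(fc_conf));
    ret = rte_eth_dev_flow_ctrl_get(port, &fc_conf);
    if (ret != 0 && ret != -ENOTSUP)
        return ret;

    fc_conf.mode = RTE_FC_NONE;     /* no RX or TX pause */
    fc_conf.autoneg = 0;
    return rte_eth_dev_flow_ctrl_set(port, &fc_conf);
}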


Has anybody else noticed this behavior of the MLX5 driver? Can anybody help
me disable this feature permanently? Thanks very much for your help.


Best wishes,

Xiaoban
