On 12/06/2012 09:10 AM, Ben Greear wrote:
> On 12/06/2012 09:05 AM, Alexander Duyck wrote:
>> On 12/05/2012 08:30 PM, Ben Greear wrote:
>>> I'm curious if/how I can set the number of tx/rx queues (or otherwise
>>> force all packets to be received in a single queue) in the ixgbe
>>> driver.  I'm using a 3.5.7+ kernel.
>>>
>>> Now, one might ask why?
>>>
>>> I have a bridging/network-impairment module that basically grabs all
>>> packets from a network device, puts them in a single queue (it can't
>>> tolerate re-ordering), and then transmits them later on a different
>>> port.
>>>
>>> When testing at 10G speeds, the processor the impairment module runs on
>>> and the processor the NIC IRQs are bound to matter quite a bit.  I'm
>>> using an E5 processor: 4 cores, 4 hyperthreads.  I disable irqbalance
>>> for the testing described here.
>>>
>>> If I bind the eth2 IRQs to processor mask 0x10 and eth3 to 0x20, then I
>>> can get 9.8 Gbps bi-directional throughput (i.e., basically line speed)
>>> when the impairment module is bound to cpu 1, 2, or 3.  I only get
>>> about 7 Gbps on cpu 0, for whatever reason.  But there is a catch: in
>>> my initial testing, I used pktgen with fixed values (i.e., the same
>>> source/dest addresses for every packet).
>>> I got to wondering about rx queues, so I changed pktgen to generate
>>> millions of different destination IPs.  Under this load, the system
>>> works much harder and can only handle about 9 Gbps throughput.  Since I
>>> had pinned all IRQs (all tx/rx queues) to the same processor, I didn't
>>> expect that spreading the packets among various rx queues would make a
>>> difference, but it does, as far as I can tell.
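[For reference, the IRQ pinning described above is normally done through
/proc/irq.  A minimal sketch, assuming the eth2/eth3 interface names and
the cpu 4/cpu 5 masks from the test setup above; the IRQ discovery step
is illustrative:

```shell
# CPU affinity masks are bitmasks: the mask for cpu n is 1 << n, in hex.
# cpu 4 -> 0x10, cpu 5 -> 0x20 (the masks used for eth2/eth3 above).
printf '%x\n' $((1 << 4))    # 10
printf '%x\n' $((1 << 5))    # 20

# Bind every eth2 queue interrupt to cpu 4 (run as root; assumes the
# queue IRQs are named after the interface in /proc/interrupts).
for irq in $(grep eth2 /proc/interrupts | cut -d: -f1 | tr -d ' '); do
    echo 10 > /proc/irq/$irq/smp_affinity
done
```
]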
>>>
>>> So, to investigate this further, I want to try configuring the NIC to
>>> use only one rx queue.  It should be sufficient to disable whatever
>>> logic the NIC uses to steer packets to one queue or another, but from
>>> what I can tell, ethtool cannot change this for the ixgbe driver.
>>>
>>> Thanks,
>>> Ben
>>>
>>
>> Ben,
>>
>> What part are you using?  If it is an 82599, you might be able to
>> enable ntuple filtering and just bind all traffic from a given
>> protocol, such as tcpv4, to a specific queue.
>>
>> The only limitation is that the packets being received cannot be
>> fragmented.  If they are, the flow director filter will not filter
>> the frame.
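[As a sketch of the ntuple suggestion above — the interface name and
queue number are illustrative, and exact rule syntax depends on the
ethtool version:

```shell
# Turn on ntuple (flow director) filtering for the interface.
ethtool -K eth2 ntuple on

# Steer tcp4 traffic to rx queue 0; with no match fields given, the
# rule is intended to catch any unfragmented tcp4 frame.
ethtool -N eth2 flow-type tcp4 action 0

# Show the installed filters.
ethtool -u eth2
```
]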
> 
> Yes, it is an 82599.  For the current test we are not using fragmented
> packets, but for general-purpose use we need to be able to handle any
> packets.  Maybe if only a few packets hit other queues it won't matter
> too much.
> 
> I did try the out-of-tree ixgbe with the MQ=0,0,0,0 modprobe option.
> That did not run too well either (about 9 Gbps in one direction, 7 Gbps
> in the other), but I have not had any time to debug that further.
> 
> What happens to packets that flow-director does not filter?  I.e., how
> are they dispersed among the various queues?
> 
> Thanks,
> Ben

Anything that gets past the flow-director filter is handled by the
standard queueing schemes, so it would likely be handled by RSS if
SR-IOV is not enabled.

Actually, now that I think about it, enabling SR-IOV is probably the
easiest way to force the ixgbe driver into single queue.  You don't need
to load the ixgbevf driver, but loading the ixgbe driver with max_vfs
set will force it into single-queue mode.
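[A sketch of that approach — module reload details vary by setup, and
max_vfs=1 is an illustrative value; per the note above, any non-zero
max_vfs should do:

```shell
# Reload ixgbe with SR-IOV enabled; a non-zero max_vfs forces the PF
# into single-queue mode.  ixgbevf does not need to be loaded.
rmmod ixgbe
modprobe ixgbe max_vfs=1

# Confirm the rx queue count on the interface (one rx queue expected).
ls /sys/class/net/eth2/queues/ | grep -c '^rx-'
```
]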

Thanks,

Alex


_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
