So you never really answered my question.  Why can't you use
/proc/irq/<irq num>/smp_affinity?

You should be able to determine which IRQs belong to the device just by
checking /proc/interrupts, and from there you can program the SMP affinity
for those interrupts directly.  Why would you need to go through the
ethtool interface?
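
For example, something along these lines (a rough sketch; the device name
eth1 and the IRQ number 42 are placeholders, take the real values from
/proc/interrupts):

    grep eth1 /proc/interrupts               # list the IRQ numbers for each queue vector
    echo 100 > /proc/irq/42/smp_affinity     # hex bitmask: pin that vector to CPU 8
    echo 8 > /proc/irq/42/smp_affinity_list  # same thing, expressed as a CPU list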

- Alex

On Wed, Mar 15, 2017 at 1:36 PM, Tal Abudi <talab...@gmail.com> wrote:
> Hi
> Thanks for the reply.
> I control the SMP affinity in userspace, but I want to improve performance on
> machines that are strongly performance-oriented.
> I have several CPU sockets, and it's important to create the queue vectors
> "near" the memory of the socket that the CPU affinity points at.
> The affinity_hint doesn't quite do the trick there.
>
> I may use drivers with fewer queues than my core count (like igb), and the
> machines can have as many as 80 cores.  So changing the weights is a good
> solution, but it won't always do the trick.
>
> I generally use one queue affinity per core, so there is no real need for a
> mask, but to make my solution more general I'm willing to pass the CPU mask.
> For example,
> ethtool --set-cpu-mask <device> <queue> <cpu-mask>
> # to set the affinity hint for the first 4:
> ethtool --set-cpu-mask eth1 0 1
> ethtool --set-cpu-mask eth1 1 2
> ethtool --set-cpu-mask eth1 2 4
>
> Thanks,
> Tal Abudi
>
> On Sun, Mar 5, 2017 at 8:06 PM, Alexander Duyck <alexander.du...@gmail.com>
> wrote:
>>
>> On Sun, Mar 5, 2017 at 8:47 AM, Tal Abudi <talab...@gmail.com> wrote:
>> > Hi All
>> >
>> > I’m looking for a nice way to pass parameters (a list of integers) to each
>> > ixgbe interface.
>> >
>> > Creating /sys entries seems too complex, so I thought about ethtool.
>>
>> Why mess around with /sys when there is already
>> /proc/irq/*/smp_affinity?  It seems a bit convoluted to try to add an
>> interface to set something that is then used by a user-space daemon to
>> set something else that is already writable.
>>
>> > How can I pass private data (a list) to each NIC with minimal driver code
>> > changes?
>>
>> What you are talking about would be really invasive.  In addition, a
>> single integer assumes either a one-CPU mapping or fewer than 32 CPUs.  In
>> most cases affinities are expressed as a bitmask that is nr_cpus bits wide.
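>>
>> For instance, on an 80-core box the mask is written as comma-separated
>> 32-bit hex groups, which a single integer cannot express (the IRQ number
>> 42 below is just a placeholder):
>>
>>     echo 00000000,00000000,00000100 > /proc/irq/42/smp_affinity   # CPU 8 only
>>     echo 0000ffff,00000000,00000000 > /proc/irq/42/smp_affinity   # CPUs 64-79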
>>
>> > To clarify, I want to use a customized affinity hint with multi-queue.
>> >
>> > For example, on a machine with 12 cores and an RSS of 4, I want to set the
>> > RX/TX queues to CPUs 8-11.  It would look something like:
>> >
>> > ethtool --set-cpu eth0 8 9 10 11   # to set the affinity hint for the first
>> > 4 queues to cpus 8-11
>> >
>> > ethtool --set-channels eth0 combined 4   # Recreate 4 Rx/Tx vectors and use
>> > the new and customized affinity hint.
>> >
>> > <Set the regular RX IRQ affinity to the same CPUs>
>>
>> I have what I believe would be a better suggestion.
>>
>> You could leave all 12 queues enabled and limit RSS to 4 queues with
>> "ethtool -X eth0 equal 4".  If you really want to restrict RSS to the last
>> 4 queues instead, you can do that by playing with the weights: set the
>> first 8 weights to 0 and the last 4 to 1.  The only downside is that
>> non-RSS traffic still defaults to queue 0.
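>>
>> Roughly (untested, just to sketch the idea on a 12-queue eth0):
>>
>>     ethtool -X eth0 equal 4                          # spread RSS evenly over queues 0-3
>>     ethtool -X eth0 weight 0 0 0 0 0 0 0 0 1 1 1 1   # or: direct RSS only to queues 8-11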
>>
>> If you want to shuffle the queues around, it is just a matter of
>> reprogramming the XPS maps ("/sys/class/net/eth0/queues/tx-*/xps_cpus")
>> and the IRQ affinities so that the maps are shifted by 4 CPUs.  That way
>> queues 4-11 are serviced by CPUs 0-7, and queues 0-3 are serviced by
>> CPUs 8-11.
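>>
>> Something along the lines of (hex bitmask values, same format as
>> smp_affinity; the queue/CPU pairing here is just illustrative):
>>
>>     echo 100 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # tx-0 -> CPU 8
>>     echo 200 > /sys/class/net/eth0/queues/tx-1/xps_cpus   # tx-1 -> CPU 9
>>     echo 400 > /sys/class/net/eth0/queues/tx-2/xps_cpus   # tx-2 -> CPU 10
>>     echo 800 > /sys/class/net/eth0/queues/tx-3/xps_cpus   # tx-3 -> CPU 11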
>
>
>
>
> --
> Best regards,
> Tal Abudi
