On Wed, Mar 07, 2018 at 10:58:34PM +0530, Kashyap Desai wrote:
> > >
> > > Also one observation using V3 series patch. I am seeing below Affinity
> > > mapping whereas I have only 72 logical CPUs. It means we are really
> > > not going to use all reply queues.
> > > e.a If I bind fio jobs on CPU 18-20, I am seeing only one reply queue
> > > is used and that may lead to performance drop as well.
> > If the mapping is in this shape, I guess it would be quite difficult to
> > figure out one perfect way to solve this situation, because each reply
> > queue has to handle IOs submitted from 4~5 CPUs on average.
> The 4.15.0-rc1 kernel has the below mapping - I am not sure which commit
> in "linux_4.16-rc-host-tags-v3.2" changes the mapping of IRQ to CPU. It
I guess the mapping you posted is read from /proc/irq/126/smp_affinity.
If so, no patch in linux_4.16-rc-host-tags-v3.2 should change the IRQ
affinity code, which is done in irq_create_affinity_masks(); as you saw,
no patch in linux_4.16-rc-host-tags-v3.2 touches that code.
Could you simply apply the patches in linux_4.16-rc-host-tags-v3.2 against
the 4.15-rc1 kernel and see whether there is any difference?
> will be really good if we can fall back to the below mapping once again.
> The current repo linux_4.16-rc-host-tags-v3.2 gives lots of random
> CPU - MSI-x mappings, and that will be problematic in performance runs.
> As I posted earlier, the latest repo will only allow us to use *18* reply
I don't think I have seen this report before; could you share how you
concluded that?
The only patch changing reply queue is the following one:
But I don't see any issue in that patch yet; can you recover 72 reply
queues after reverting the patch in the above link?
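One way to check the active queue count before and after the revert is to
count the interrupt vectors the driver actually registered; a sketch, where
"megaraid" is an assumed match pattern for the HBA's entries in
/proc/interrupts and should be adjusted to the driver in question:

```shell
#!/bin/sh
# Sketch: count the HBA's registered interrupt vectors and compare with
# the online CPU count. "megaraid" is an assumed name pattern; replace it
# with whatever name the driver uses in /proc/interrupts.
vecs=$(grep -c 'megaraid' /proc/interrupts)
cpus=$(getconf _NPROCESSORS_ONLN)
echo "registered vectors: $vecs, online CPUs: $cpus"
```

If the patch is really the cause, the vector count printed should move
between 18 and 72 across the revert.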
> queues instead of *72*. Lots of performance-related issues can pop up on
> different setups due to inconsistency in the CPU - MSI-x mapping. BTW,
> are the changes in this area intentional in "linux_4.16-rc-host-tags-v3.2"?
As you mentioned in the following link, you didn't see a big performance
drop with linux_4.16-rc-host-tags-v3.2, right?