On Tue, May 15, 2018 at 04:51:53PM -0400, Adam Wallis wrote:
> On 5/15/2018 11:07 AM, Greg Kroah-Hartman wrote:
> > On Tue, May 15, 2018 at 09:53:57AM -0400, Adam Wallis wrote:
> > Does this really do anything?  Given the speed of USB3 at the moment,
> > does fixing the memory to the node the PCI device is on show any
> > measurable speedups?  Last I remember about NUMA systems, it wasn't
> > always a win depending on where the irq came in from, right?
> > 
> > thanks,
> > 
> > greg k-h
> 
> I was getting really inconsistent throughput speeds on a system I was
> testing with NUMA nodes. Using an SMMU in identity mode, I was able to
> track down where the performance deltas were coming from: some of the
> rings were going to the "wrong" node.
> 
> Yes, it's possible to handle your IRQs with CPUs on the wrong NUMA
> node, but I would argue that it's always best to have the rings for
> USB controller X as close to controller X as possible. Users can then
> properly constrain IRQs, and even kernel threads, to the right domain
> if they so desire.
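
To make that concrete, here is a minimal userspace sketch of pinning an
IRQ to the CPUs of one node via the standard /proc/irq interface. The
IRQ number (108) and CPU range (24-31) are hypothetical; in practice
you would use the xHCI controller's IRQ and the CPUs of the node
reported by the device's sysfs numa_node attribute:

#include <stdio.h>
#include <stdlib.h>

/*
 * Pin an IRQ to a set of CPUs by writing a CPU list to
 * /proc/irq/<irq>/smp_affinity_list (requires root).  IRQ 108 and
 * CPUs 24-31 are made-up example values; substitute the controller's
 * IRQ and the CPUs of the node the controller sits on (see
 * /sys/bus/pci/devices/<bdf>/numa_node).
 */
int main(void)
{
	const char *path = "/proc/irq/108/smp_affinity_list";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	if (fprintf(f, "24-31\n") < 0) {
		perror(path);
		fclose(f);
		return EXIT_FAILURE;
	}
	if (fclose(f) == EOF) {
		perror(path);
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}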
> 
> After setting the IRQ affinity to the right node AND applying this patch, I
> started getting much more reliable (and faster) results.
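
For readers following along, a minimal sketch in the spirit of the
patch under discussion: use dev_to_node() to find the NUMA node the
PCI device hangs off, and pass that node to the allocator. The struct
and function names below are illustrative, not the actual xhci-mem
code:

#include <linux/device.h>
#include <linux/numa.h>
#include <linux/slab.h>

/* Illustrative only: a stand-in for a ring segment structure. */
struct demo_ring_seg {
	void	*trbs;
};

static struct demo_ring_seg *demo_ring_seg_alloc(struct device *dev,
						 size_t size, gfp_t flags)
{
	/*
	 * NUMA node of the underlying device, rather than the node of
	 * whichever CPU happens to run the allocation.  Returns
	 * NUMA_NO_NODE if the platform has no affinity information,
	 * in which case kzalloc_node() behaves like plain kzalloc().
	 */
	int node = dev_to_node(dev);
	struct demo_ring_seg *seg;

	seg = kzalloc_node(sizeof(*seg), flags, node);
	if (!seg)
		return NULL;

	seg->trbs = kzalloc_node(size, flags, node);
	if (!seg->trbs) {
		kfree(seg);
		return NULL;
	}
	return seg;
}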

Ok, fair enough, I was hoping that "modern" systems would have better
NUMA memory interconnects.  I guess that still isn't the case :(

thanks,

greg k-h
