On Fri, 6 Apr 2018, Ming Lei wrote:
>
> I will post V4 soon by using cpu_present_mask in the 1st stage irq spread.
> And it should work fine for Kashyap's case in normal cases.

No need to resend. I've changed it already and will push it out after
lunch.

Thanks,

tglx

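For readers following along, here is a toy user-space sketch of the idea behind the change discussed above: spread the vectors over cpu_present_mask first, then over the remaining possible-but-not-present CPUs. This is not the actual kernel code in kernel/irq/affinity.c and not Ming Lei's V4 patch; the masks and vector count below are made up purely for illustration.

#include <stdio.h>

#define NR_POSSIBLE_CPUS 8
#define NR_VECTORS       4

/* Made-up example masks: bit i set means CPU i is in the mask. */
static const unsigned int possible_mask = 0xff; /* CPUs 0-7 possible */
static const unsigned int present_mask  = 0x0f; /* CPUs 0-3 present  */

int main(void)
{
	unsigned int vec_cpus[NR_VECTORS] = { 0 };
	int cpu, vec;

	/* Stage 1: spread the present CPUs over the vectors first, so
	 * every vector ends up with at least one CPU that can take the
	 * interrupt right now. */
	vec = 0;
	for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++) {
		if (!(present_mask & (1u << cpu)))
			continue;
		vec_cpus[vec] |= 1u << cpu;
		vec = (vec + 1) % NR_VECTORS;
	}

	/* Stage 2: spread the remaining possible-but-not-present CPUs,
	 * so the vectors are prepared for CPUs hotplugged later. */
	vec = 0;
	for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++) {
		if ((present_mask & (1u << cpu)) ||
		    !(possible_mask & (1u << cpu)))
			continue;
		vec_cpus[vec] |= 1u << cpu;
		vec = (vec + 1) % NR_VECTORS;
	}

	for (vec = 0; vec < NR_VECTORS; vec++)
		printf("irq vector %d: cpu mask 0x%02x\n", vec, vec_cpus[vec]);

	return 0;
}

With these made-up masks every vector gets one present CPU (0-3) plus one not-present CPU (4-7), so no vector is left without an online CPU while the spread still covers the whole possible set.
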
On Wed, 4 Apr 2018, Ming Lei wrote:
> On Wed, Apr 04, 2018 at 02:45:18PM +0200, Thomas Gleixner wrote:
> > Now the 4 offline CPUs are plugged in again. These CPUs won't ever get an
> > interrupt as all interrupts stay on CPU 0-3 unless one of these CPUs is
> > unplugged. Using cpu_present_mask the [...]

On Wed, 4 Apr 2018, Thomas Gleixner wrote:
> I'm aware how that hw-queue stuff works. But that only works if the
> spreading algorithm makes the interrupts affine to offline/not-present CPUs
> when the block device is initialized.
>
> In the example above:
>
> > > > irq 39, cpu list 0,4
> > > > irq 40, cpu list 1,6
> > > > irq 41, cpu list 2,5
> > > > irq 42, cpu list 3,7

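As a rough sanity check on those numbers: the device has 4 vectors and the machine has 8 possible CPUs, so the spread hands each vector 8 / 4 = 2 CPUs, one from the online range 0-3 and one from the not-present range 4-7. The snippet below is only a naive, topology-unaware illustration of that chunking; the real spreading code also groups CPUs by node and sibling topology, which is why the pairings above (1 with 6, 2 with 5) differ from a plain round-robin.

#include <stdio.h>

#define NR_POSSIBLE_CPUS 8	/* CPUs 0-7; only 0-3 are online in the example */
#define NR_VECTORS       4

/* Naive illustration only, not the kernel's spreading algorithm:
 * deal the possible CPUs out to the vectors round-robin, giving each
 * vector NR_POSSIBLE_CPUS / NR_VECTORS == 2 CPUs. */
int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
		printf("irq vector %d: cpu %d\n", cpu % NR_VECTORS, cpu);

	return 0;
}
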
On Thu, 8 Mar 2018, Ming Lei wrote:
> 1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> irq 39, cpu list 0
> irq 40, cpu list 1
> irq 41, cpu list 2
> irq 42, cpu list 3
>
> 2) after 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> [...]

84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
may cause irq vector assigned to all offline CPUs, and this kind of
assignment may cause much less irq vectors mapped to online CPUs, and
performance may get hurt.
For example, in a 8 cores system, 0~3 online, 4~8 offline/not
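
To make the "far fewer irq vectors mapped to online CPUs" point concrete, here is a small user-space sketch with made-up numbers (again not the kernel code): once the device requests as many vectors as there are possible CPUs, a possible-mask spread for that 8-core configuration leaves half of the vectors affine only to offline/not-present CPUs, so the queues behind them are effectively unusable while those CPUs stay offline.

#include <stdio.h>

#define NR_POSSIBLE_CPUS 8
#define NR_VECTORS       8

/* Made-up example: CPUs 0-3 online, CPUs 4-7 offline/not present. */
static const unsigned int online_mask = 0x0f;

int main(void)
{
	unsigned int vec_cpus[NR_VECTORS] = { 0 };
	int cpu, vec, usable = 0;

	/* Spread over all possible CPUs (the post-84676c1f21 behaviour,
	 * much simplified): with 8 vectors and 8 possible CPUs each
	 * vector gets exactly one CPU. */
	for (cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
		vec_cpus[cpu % NR_VECTORS] |= 1u << cpu;

	for (vec = 0; vec < NR_VECTORS; vec++) {
		int has_online = !!(vec_cpus[vec] & online_mask);

		printf("irq vector %d: cpu mask 0x%02x (%s)\n", vec,
		       vec_cpus[vec],
		       has_online ? "online CPU available" : "offline CPUs only");
		usable += has_online;
	}

	printf("%d of %d vectors are mapped to an online CPU\n",
	       usable, NR_VECTORS);

	return 0;
}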