On 4/14/21 12:11 PM, Jesse Brandeburg wrote:
> Nitesh Narayan Lal wrote:
>
>>> The original issue as seen, was that if you rmmod/insmod a driver
>>> *without* irqbalance running, the default irq mask is -1, which means
>>> any CPU. The older kernels (this issue was patched in 2014) used to use
>>> that affinity mask, but the value programmed

On 4/7/21 11:18 AM, Nitesh Narayan Lal wrote:
> On 4/6/21 1:22 PM, Jesse Brandeburg wrote:
>> Continuing a thread from a bit ago...
>>
>> Nitesh Narayan Lal wrote:
>>
>>> After a little more digging, I found out why cpumask_local_spread change
>>> affects the general/initial smp_affinity for certain device IRQs.
>>>
>>> After the introduction of the commit:
>>>
>>>   e2e64a932 genirq: Set initial affinity in irq_set_affinity_hint()

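[For context, a minimal sketch of the driver-side pattern under discussion, modeled on how NIC drivers spread their per-queue vectors; the function and variable names here are illustrative, not from the thread. The point is that since e2e64a932, irq_set_affinity_hint() does not merely record a hint for irqbalance, it also programs the IRQ's initial affinity, so whatever CPU cpumask_local_spread() picks becomes the initial smp_affinity:]

#include <linux/interrupt.h>	/* irq_set_affinity_hint() */
#include <linux/cpumask.h>	/* cpumask_local_spread(), get_cpu_mask() */

/*
 * Illustrative sketch: spread one IRQ per queue across CPUs local to
 * "node". Since e2e64a932, the hint written below is also applied as
 * the IRQ's initial affinity, which is why a change in what
 * cpumask_local_spread() returns shows up in /proc/irq/<n>/smp_affinity.
 */
static void example_spread_queue_irqs(unsigned int *irqs, unsigned int nvec,
				      int node)
{
	unsigned int i, cpu;

	for (i = 0; i < nvec; i++) {
		cpu = cpumask_local_spread(i, node);
		irq_set_affinity_hint(irqs[i], get_cpu_mask(cpu));
	}
}

[When irqbalance is running it normally rewrites these affinities afterwards; without it, as Jesse notes above, the hinted value is what the IRQ keeps.]
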
On 3/4/21 4:13 PM, Alex Belits wrote:
> On 3/4/21 10:15, Nitesh Narayan Lal wrote:
>> On 2/11/21 10:55 AM, Nitesh Narayan Lal wrote:
>>> On 2/6/21 7:43 PM, Nitesh Narayan Lal wrote:

On 2/11/21 10:55 AM, Nitesh Narayan Lal wrote:
> On 2/6/21 7:43 PM, Nitesh Narayan Lal wrote:
>> On 2/5/21 5:23 PM, Thomas Gleixner wrote:
>>> On Thu, Feb 04 2021 at 14:17, Nitesh Narayan Lal wrote:
>>>> On 2/4/21 2:06 PM, Marcelo Tosatti wrote:
>>
>> How about adding a new flag for isolcpus instead?
>>
> Do you mean a flag based on which we can switch the affinity mask to
> housekeeping for all the devices at the time of IRQ distribution?

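[Just as a sketch of the direction being floated here; the flag name below is invented for this example and exists neither upstream nor in this thread. A dedicated isolcpus= flag would make the restriction opt-in at boot instead of changing cpumask_local_spread() for everyone:]

/* Hypothetical flag, invented name; not upstream kernel code. */
#define HK_FLAG_IRQ_SPREAD	(1 << 9)

/*
 * cpumask_local_spread() would mask with this instead of always using
 * HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ; housekeeping_cpumask() falls
 * back to cpu_possible_mask when the flag was never set at boot, so
 * the default behavior would be unchanged.
 */
static const struct cpumask *irq_spread_mask(void)
{
	return housekeeping_cpumask(HK_FLAG_IRQ_SPREAD);
}
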
On Fri, Jan 29 2021 at 09:35, Nitesh Narayan Lal wrote:
> On 1/29/21 9:23 AM, Marcelo Tosatti wrote:
>>> I am not sure about the PCI patch as I don't think we can control that from
>>> the userspace or maybe I am wrong?
>>
>> You mean "lib: Restrict cpumask_local_spread to housekeeping CPUs" ?

On 2/4/21 2:06 PM, Marcelo Tosatti wrote:
> On Thu, Feb 04, 2021 at 01:47:38PM -0500, Nitesh Narayan Lal wrote:
[...]
> Nitesh, is there anything preventing this from being fixed
> in userspace ? (as Thomas suggested previously).

Everything which is not managed can be steered by user space.
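[As a concrete example of that userspace steering, a minimal sketch; the IRQ number 42 and the mask are placeholders. Any non-managed IRQ exposes a writable /proc/irq/<n>/smp_affinity, which is the same knob irqbalance uses, so such IRQs can be moved off isolated CPUs without kernel changes; writes to a managed IRQ's file are rejected with EIO since the kernel owns managed affinities:]

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Placeholder IRQ number; pick a real one from /proc/interrupts. */
	const char *path = "/proc/irq/42/smp_affinity";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	/* Hex CPU mask: 0xf restricts the IRQ to CPUs 0-3. */
	if (fputs("f\n", f) == EOF || fclose(f) == EOF) {
		perror(path);
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}
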
On Thu, Feb 04, 2021 at 01:47:38PM -0500, Nitesh Narayan Lal wrote:
> On 2/4/21 1:15 PM, Marcelo Tosatti wrote:
>> On Thu, Jan 28, 2021 at 09:01:37PM +0100, Thomas Gleixner wrote:
>>> On Thu, Jan 28 2021 at 13:59, Marcelo Tosatti wrote:
>>>>> The whole pile wants to be reverted. It's simply broken in several ways.
>>>>
>>>> I was asking for your comments on interaction with CPU hotplug :-)
>>>
>>> Which I answered in a separate mail :)

On Fri, Jan 29, 2021 at 07:41:27AM -0800, Alex Belits wrote:
> On 1/28/21 07:56, Thomas Gleixner wrote:
>> On Wed, Jan 27 2021 at 10:09, Marcelo Tosatti wrote:

On Fri, Jan 29, 2021 at 08:55:20AM -0500, Nitesh Narayan Lal wrote:
> On 1/28/21 3:01 PM, Thomas Gleixner wrote:
>> On Thu, Jan 28 2021 at 13:59, Marcelo Tosatti wrote:
>>
>> So housekeeping_cpumask has multiple meanings. In this

On Wed, Jan 27 2021 at 09:19, Marcelo Tosatti wrote:
> On Wed, Jan 27, 2021 at 11:57:16AM +0000, Robin Murphy wrote:
>> > + hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
>> > + mask = housekeeping_cpumask(hk_flags);
>>
>> AFAICS, this generally resolves to something based on
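[For reference, the helper Robin's comment points at looks roughly like this in this era's kernel/sched/isolation.c; a sketch, so treat details as approximate. When nothing was isolated on the command line, the static key is off and the function resolves to cpu_possible_mask, whereas the pre-patch cpumask_local_spread() iterated online CPUs, which is presumably what the comment and the later CPU-hotplug question are getting at:]

const struct cpumask *housekeeping_cpumask(enum hk_flags flags)
{
	if (static_branch_unlikely(&housekeeping_overridden))
		if (housekeeping_flags & flags)
			return housekeeping_mask;
	/* No isolation configured: every possible CPU is housekeeping. */
	return cpu_possible_mask;
}
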
On Wed, Jan 27 2021 at 10:09, Marcelo Tosatti wrote:
> On Wed, Jan 27, 2021 at 12:36:30PM +0000, Robin Murphy wrote:
>> > >  /**
>> > >   * cpumask_next - get the next cpu in a cpumask
>> > > @@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
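[To make the hunk easier to follow in context, here is a reconstruction of the patched lib/cpumask.c function with the two quoted lines applied, pieced together from the fragments in this thread; a sketch, so details may differ from the posted patch:]

unsigned int cpumask_local_spread(unsigned int i, int node)
{
	int cpu, hk_flags;
	const struct cpumask *mask;

	/* Spread the IRQs over housekeeping CPUs only. */
	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
	mask = housekeeping_cpumask(hk_flags);

	/* Wrap: we always want a CPU. */
	i %= cpumask_weight(mask);

	if (node == NUMA_NO_NODE) {
		for_each_cpu(cpu, mask)
			if (i-- == 0)
				return cpu;
	} else {
		/* NUMA-local housekeeping CPUs first ... */
		for_each_cpu_and(cpu, cpumask_of_node(node), mask)
			if (i-- == 0)
				return cpu;

		/* ... then any remaining housekeeping CPU. */
		for_each_cpu(cpu, mask) {
			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
				continue;
			if (i-- == 0)
				return cpu;
		}
	}
	BUG();
}
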
On Wed, Jan 27, 2021 at 11:57:16AM +0000, Robin Murphy wrote:
> Hi,
>
> On 2020-06-25 23:34, Nitesh Narayan Lal wrote:
> > From: Alex Belits
> >
> > The current implementation of cpumask_local_spread() does not respect the
> > isolated CPUs, i.e., even if a CPU has been isolated for Real-Time

On 6/30/20 8:32 PM, Andrew Morton wrote:
> On Mon, 29 Jun 2020 12:11:25 -0400 Nitesh Narayan Lal wrote:
>> On 6/25/20 6:34 PM, Nitesh Narayan Lal wrote:
>>> From: Alex Belits

On 6/25/20 6:34 PM, Nitesh Narayan Lal wrote:
> From: Alex Belits
>
> The current implementation of cpumask_local_spread() does not respect the
> isolated CPUs, i.e., even if a CPU has been isolated for Real-Time task,
> it will return it to the caller for pinning of its IRQ threads. Having
> these unwanted IRQ threads on an isolated CPU adds up to a latency overhead.