Hello, Prateek.
On Mon, Jan 15, 2018 at 05:32:18PM +0530, Prateek Sood wrote:
> My understanding of WQ_MEM_RECLAIM was that it needs to be used for
> cases where memory pressure could cause deadlocks.
Yes, that is the primary role; however, there are a couple places
where we need it to isolate a
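
The guarantee under discussion is requested at allocation time. As a
minimal sketch, assuming a hypothetical driver-owned queue (the
my_reclaim_wq name is illustrative, not from this thread):

  #include <linux/workqueue.h>

  static struct workqueue_struct *my_reclaim_wq;

  static int __init my_driver_init(void)
  {
          /*
           * WQ_MEM_RECLAIM gives the queue a dedicated rescuer
           * thread, so queued work can make forward progress even
           * when memory pressure prevents forking new kworkers.
           */
          my_reclaim_wq = alloc_workqueue("my_reclaim_wq",
                                          WQ_MEM_RECLAIM, 0);
          if (!my_reclaim_wq)
                  return -ENOMEM;
          return 0;
  }

The rule this enforces: any work item that can sit in a memory-reclaim
path must run on such a queue, which is the deadlock-avoidance role
described above.
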
On 01/02/2018 09:46 PM, Tejun Heo wrote:
> Hello,
>
> On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
>> task T is waiting for cpuset_mutex acquired
>> by kworker/2:1
>>
>> sh ==> cpuhp/2 ==> kworker/2:1 ==> sh
>>
>> kworker/2:3 ==> kthreadd ==> Task T ==> kworker/2:1
>>
>> It
On Wed, Jan 10, 2018 at 01:41:01PM -0800, Tejun Heo wrote:
> Hello, Paul.
>
> On Wed, Jan 10, 2018 at 12:08:21PM -0800, Paul E. McKenney wrote:
> > And one additional question... How are we pushing this upstream? By
> > default, I would push things starting this late into the merge window
> >
Hello, Paul.
On Wed, Jan 10, 2018 at 12:08:21PM -0800, Paul E. McKenney wrote:
> And one additional question... How are we pushing this upstream? By
> default, I would push things starting this late into the merge window
> following the next one (v4.17), but would be more than willing to make
>
On Tue, Jan 09, 2018 at 08:00:22AM -0800, Paul E. McKenney wrote:
> On Tue, Jan 09, 2018 at 07:37:52AM -0800, Tejun Heo wrote:
> > Hello, Paul.
> >
> > On Tue, Jan 09, 2018 at 07:21:12AM -0800, Paul E. McKenney wrote:
> > > > The code was previously using both system_power_efficient_wq and
> > >
On Tue, Jan 09, 2018 at 07:37:52AM -0800, Tejun Heo wrote:
> Hello, Paul.
>
> On Tue, Jan 09, 2018 at 07:21:12AM -0800, Paul E. McKenney wrote:
> > > The code was previously using both system_power_efficient_wq and
> > > system_workqueue (for the expedited path). I guess the options were
> > >
Hello, Paul.
On Tue, Jan 09, 2018 at 07:21:12AM -0800, Paul E. McKenney wrote:
> > The code was previously using both system_power_efficient_wq and
> > system_workqueue (for the expedited path). I guess the options were
> > either using two workqueues or dropping POWER_EFFICIENT. I have no
> >
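
For context, the options listed above map directly onto allocation
flags (the quoted system_workqueue is the ordinary system_wq). A
hedged sketch of both shapes, with illustrative queue names rather
than whatever was eventually merged:

  static struct workqueue_struct *gp_wq, *exp_wq;

  /* Option 1: two dedicated queues, preserving the old split. */
  gp_wq  = alloc_workqueue("rcu_gp",
                           WQ_MEM_RECLAIM | WQ_POWER_EFFICIENT, 0);
  exp_wq = alloc_workqueue("rcu_gp_exp", WQ_MEM_RECLAIM, 0);

  /* Option 2: one shared queue, dropping WQ_POWER_EFFICIENT. */
  gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM, 0);
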
On Tue, Jan 09, 2018 at 05:44:48AM -0800, Tejun Heo wrote:
> Hello, Paul.
>
> On Mon, Jan 08, 2018 at 08:20:16PM -0800, Paul E. McKenney wrote:
> > OK, so I can put WQ_MEM_RECLAIM on the early boot creation of RCU's
> > workqueue_struct as shown below, right?
>
> Yes, this looks good to me.
Hello, Paul.
On Mon, Jan 08, 2018 at 08:20:16PM -0800, Paul E. McKenney wrote:
> OK, so I can put WQ_MEM_RECLAIM on the early boot creation of RCU's
> workqueue_struct as shown below, right?
Yes, this looks good to me. Just one question.
> +struct workqueue_struct *rcu_gp_workqueue;
> +
>
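
The creation being reviewed would look roughly as follows; this is a
sketch assuming the rcu_gp_workqueue pointer from the quoted hunk is
filled in from RCU's early init (the call site and queue name are
assumptions, not the merged code):

  struct workqueue_struct *rcu_gp_workqueue;

  void __init rcu_init(void)
  {
          /*
           * Runs before workqueue_init(), so the rescuer implied by
           * WQ_MEM_RECLAIM cannot be spawned yet; see the
           * init_rescuer() discussion further down this listing.
           */
          rcu_gp_workqueue = alloc_workqueue("rcu_gp",
                                             WQ_MEM_RECLAIM, 0);
          WARN_ON(!rcu_gp_workqueue);
  }
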
On Mon, Jan 08, 2018 at 07:42:11PM -0800, Tejun Heo wrote:
> Hello, Paul.
>
> On Mon, Jan 08, 2018 at 04:31:27PM -0800, Paul E. McKenney wrote:
> > +static int __init rcu_init_wq_rescuer(void)
> > +{
> > + WARN_ON(init_rescuer(rcu_gp_workqueue));
> > + return 0;
> > +}
> >
Hello, Paul.
On Mon, Jan 08, 2018 at 04:31:27PM -0800, Paul E. McKenney wrote:
> +static int __init rcu_init_wq_rescuer(void)
> +{
> + WARN_ON(init_rescuer(rcu_gp_workqueue));
> + return 0;
> +}
> +core_initcall(rcu_init_wq_rescuer);
So, what I don't get is why RCU needs to call this
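
The direction the question points at is handling this in the
workqueue core instead of in each user: workqueue_init() can walk the
queues created during early boot and start any rescuer that
WQ_MEM_RECLAIM asked for. A sketch of that shape (assumed, i.e.
paraphrasing rather than quoting the eventual workqueue-side change):

  /* Late in workqueue_init(), once kthreads can be created: */
  list_for_each_entry(wq, &workqueues, list) {
          WARN(init_rescuer(wq),
               "workqueue: failed to create early rescuer for %s",
               wq->name);
  }

With that in place, no core_initcall is needed in RCU at all.
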
On Mon, Jan 08, 2018 at 02:52:38PM -0800, Paul E. McKenney wrote:
> On Mon, Jan 08, 2018 at 04:28:23AM -0800, Tejun Heo wrote:
> > Hello, Paul.
> >
> > Sorry about the delay. Travel followed by cold. :(
> >
> > On Tue, Jan 02, 2018 at 10:01:19AM -0800, Paul E. McKenney wrote:
> > > Actually,
On Mon, Jan 08, 2018 at 04:28:23AM -0800, Tejun Heo wrote:
> Hello, Paul.
>
> Sorry about the delay. Travel followed by cold. :(
>
> On Tue, Jan 02, 2018 at 10:01:19AM -0800, Paul E. McKenney wrote:
> > Actually, after taking a quick look, could you please supply me with
> > a way of marking a
Hello, Paul.
Sorry about the delay. Travel followed by cold. :(
On Tue, Jan 02, 2018 at 10:01:19AM -0800, Paul E. McKenney wrote:
> Actually, after taking a quick look, could you please supply me with
> a way of marking a statically allocated workqueue as WQ_MEM_RECLAIM after
> the fact?
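
For reference, no stock interface exists for this: WQ_MEM_RECLAIM is
consumed by alloc_workqueue(), which is when the rescuer is normally
created. The two-step pattern the thread converges on (visible in the
rcu_init_wq_rescuer() patch quoted earlier in this listing), sketched
here with illustrative names:

  static struct workqueue_struct *early_wq;

  void __init some_early_init(void)
  {
          /* Request the flag up front, at allocation time... */
          early_wq = alloc_workqueue("early_wq", WQ_MEM_RECLAIM, 0);
  }

  /* ...and attach the deferred rescuer once kthreadd is running. */
  static int __init early_wq_attach_rescuer(void)
  {
          WARN_ON(init_rescuer(early_wq));
          return 0;
  }
  core_initcall(early_wq_attach_rescuer);
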
On Tue, Jan 02, 2018 at 09:44:08AM -0800, Paul E. McKenney wrote:
> On Tue, Jan 02, 2018 at 08:16:56AM -0800, Tejun Heo wrote:
> > Hello,
> >
> > On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
> > > task T is waiting for cpuset_mutex acquired
> > > by kworker/2:1
> > >
> > > sh
On Tue, Jan 02, 2018 at 08:16:56AM -0800, Tejun Heo wrote:
> Hello,
>
> On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
> > task T is waiting for cpuset_mutex acquired
> > by kworker/2:1
> >
> > sh ==> cpuhp/2 ==> kworker/2:1 ==> sh
> >
> > kworker/2:3 ==> kthreadd ==> Task T ==>
Hello,
On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
> task T is waiting for cpuset_mutex acquired
> by kworker/2:1
>
> sh ==> cpuhp/2 ==> kworker/2:1 ==> sh
>
> kworker/2:3 ==> kthreadd ==> Task T ==> kworker/2:1
>
> It seems that my earlier patch set should fix this
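
Reading each "==>" as "blocks waiting on", the first cycle is the
classic flush-from-hotplug shape. A minimal illustrative reproduction
with hypothetical names (not the reported call chain):

  static void my_workfn(struct work_struct *work)
  {
          cpus_read_lock();       /* blocks: a writer is waiting */
          cpus_read_unlock();
  }
  static DECLARE_WORK(my_work, my_workfn);

  /*
   * Runs on the cpuhp thread while the hotplug lock is write-held:
   * waiting here for a work item that takes cpus_read_lock() closes
   * the cycle, exactly like cpuhp/2 waiting on kworker/2:1 above.
   */
  static int my_cpu_offline_cb(unsigned int cpu)
  {
          flush_work(&my_work);
          return 0;
  }
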
On 12/13/2017 09:36 PM, Tejun Heo wrote:
> Hello, Prateek.
>
> On Wed, Dec 13, 2017 at 01:20:46PM +0530, Prateek Sood wrote:
>> This change makes the usage of cpuset_hotplug_workfn() from the cpu
>> hotplug path synchronous. For memory hotplug it still remains
>> asynchronous.
>
> Ah, right.
>
>>
On 12/15/2017 06:52 PM, Tejun Heo wrote:
> Hello, Prateek.
>
> On Fri, Dec 15, 2017 at 02:24:55PM +0530, Prateek Sood wrote:
>> Following are two ways to improve cgroup_transfer_tasks(). In
>> both cases a task in the PF_EXITING state would be left in the source
>> cgroup. It would be removed by
Hello, Prateek.
On Fri, Dec 15, 2017 at 02:24:55PM +0530, Prateek Sood wrote:
> Following are two ways to improve cgroup_transfer_tasks(). In
> both cases a task in the PF_EXITING state would be left in the source
> cgroup. It would be removed by cgroup_exit() in the exit path.
>
> diff --git
On 12/13/2017 09:10 PM, Tejun Heo wrote:
Hi TJ,
> Hello, Prateek.
>
> On Wed, Dec 13, 2017 at 07:58:24PM +0530, Prateek Sood wrote:
>> Did you mean something like below? If not, then could you
>> please share a patch for this problem in
>> cgroup_transfer_tasks().
>
> Oh we surely can add a new
Hello, Prateek.
On Wed, Dec 13, 2017 at 01:20:46PM +0530, Prateek Sood wrote:
> This change makes the usage of cpuset_hotplug_workfn() from the cpu
> hotplug path synchronous. For memory hotplug it still remains
> asynchronous.
Ah, right.
> Memory migration happening from cpuset_hotplug_workfn() is
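
The split being described, sketched under the assumption that
cpuset_hotplug_work and cpuset_hotplug_workfn() keep their current
roles (the exact call site shown is an assumption, not the posted
patch):

  /* CPU hotplug path: run the update synchronously. */
  void cpuset_update_active_cpus(void)
  {
          /* was: schedule_work(&cpuset_hotplug_work); */
          cpuset_hotplug_workfn(&cpuset_hotplug_work);
  }

The memory-hotplug notifier keeps its
schedule_work(&cpuset_hotplug_work) call and so stays asynchronous.
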
Hello, Prateek.
On Wed, Dec 13, 2017 at 07:58:24PM +0530, Prateek Sood wrote:
> Did you mean something like below? If not, then could you
> please share a patch for this problem in
> cgroup_transfer_tasks().
Oh we surely can add a new iterator but we can just count in
cgroup_transfer_tasks() too,
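
A sketch of the counting alternative (shape assumed, not posted code;
css_task_iter_start/next/end are the real cgroup iterator API): repeat
the transfer pass until a full pass moves nothing, so tasks that race
in are caught, while PF_EXITING tasks are left for cgroup_exit():

  int moved;

  do {
          struct css_task_iter it;
          struct task_struct *task;

          moved = 0;
          css_task_iter_start(&from->self, 0, &it);
          while ((task = css_task_iter_next(&it))) {
                  if (task->flags & PF_EXITING)
                          continue;       /* cgroup_exit() cleans up */
                  /* migrate @task from @from to @to here */
                  moved++;
          }
          css_task_iter_end(&it);
  } while (moved);
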
On 12/11/2017 09:02 PM, Tejun Heo wrote:
> Hello, Prateek.
>
> On Fri, Dec 08, 2017 at 05:15:55PM +0530, Prateek Sood wrote:
>> There is one deadlock issue during cgroup migration from the cpu
>> hotplug path when a task T is being moved from the source to the
>> destination cgroup.
>>
>> kworker/0:0
>>
On 12/11/2017 08:50 PM, Tejun Heo wrote:
> Hello, Peter.
>
> On Tue, Dec 05, 2017 at 12:01:17AM +0100, Peter Zijlstra wrote:
>>> AFAICS, this should remove the circular dependency you originally
>>> reported. I'll revert the two cpuset commits for now.
>>
>> So I liked his patches in that we
Hello, Prateek.
On Fri, Dec 08, 2017 at 05:15:55PM +0530, Prateek Sood wrote:
> There is one deadlock issue during cgroup migration from the cpu
> hotplug path when a task T is being moved from the source to the
> destination cgroup.
>
> kworker/0:0
> cpuset_hotplug_workfn()
>
Hello, Peter.
On Tue, Dec 05, 2017 at 12:01:17AM +0100, Peter Zijlstra wrote:
> > AFAICS, this should remove the circular dependency you originally
> > reported. I'll revert the two cpuset commits for now.
>
> So I liked his patches in that we would be able to go back to
> synchronous
On 12/08/2017 03:10 PM, Prateek Sood wrote:
> On 12/05/2017 04:31 AM, Peter Zijlstra wrote:
>> On Mon, Dec 04, 2017 at 02:58:25PM -0800, Tejun Heo wrote:
>>> Hello, again.
>>>
>>> On Mon, Dec 04, 2017 at 12:22:19PM -0800, Tejun Heo wrote:
Hello,
On Mon, Dec 04, 2017 at 10:44:49AM
On 12/05/2017 04:31 AM, Peter Zijlstra wrote:
> On Mon, Dec 04, 2017 at 02:58:25PM -0800, Tejun Heo wrote:
>> Hello, again.
>>
>> On Mon, Dec 04, 2017 at 12:22:19PM -0800, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Mon, Dec 04, 2017 at 10:44:49AM +0530, Prateek Sood wrote:
Any feedback/suggestion
On Mon, Dec 04, 2017 at 02:58:25PM -0800, Tejun Heo wrote:
> Hello, again.
>
> On Mon, Dec 04, 2017 at 12:22:19PM -0800, Tejun Heo wrote:
> > Hello,
> >
> > On Mon, Dec 04, 2017 at 10:44:49AM +0530, Prateek Sood wrote:
> > > Any feedback/suggestion for this patch?
> >
> > Sorry about the delay.
Hello, again.
On Mon, Dec 04, 2017 at 12:22:19PM -0800, Tejun Heo wrote:
> Hello,
>
> On Mon, Dec 04, 2017 at 10:44:49AM +0530, Prateek Sood wrote:
> > Any feedback/suggestion for this patch?
>
> Sorry about the delay. I'm a bit worried because it feels like we're
> chasing a squirrel. I'll
Hello,
On Mon, Dec 04, 2017 at 10:44:49AM +0530, Prateek Sood wrote:
> Any feedback/suggestion for this patch?
Sorry about the delay. I'm a bit worried because it feels like we're
chasing a squirrel. I'll think through the recent changes and this
one and get back to you.
Thanks.
--
tejun
On 11/28/2017 05:05 PM, Prateek Sood wrote:
> CPU1
> cpus_read_lock+0x3e/0x80
> static_key_slow_inc+0xe/0xa0
> cpuset_css_online+0x62/0x330
> online_css+0x26/0x80
> cgroup_apply_control_enable+0x266/0x3d0
> cgroup_mkdir+0x37d/0x4f0
> kernfs_iop_mkdir+0x53/0x80
> vfs_mkdir+0x10e/0x1a0
>
CPU1
cpus_read_lock+0x3e/0x80
static_key_slow_inc+0xe/0xa0
cpuset_css_online+0x62/0x330
online_css+0x26/0x80
cgroup_apply_control_enable+0x266/0x3d0
cgroup_mkdir+0x37d/0x4f0
kernfs_iop_mkdir+0x53/0x80
vfs_mkdir+0x10e/0x1a0
SyS_mkdirat+0xb3/0xe0
entry_SYSCALL_64_fastpath+0x23/0x9a
CPU0
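
The trace above is one side of the inversion: the mkdir path ends up
taking cpus_read_lock() inside cpuset_css_online() (via
static_key_slow_inc()) while cpuset locks are already held, whereas
the hotplug path acquires them the other way around. The fix direction
debated in this thread is one consistent order; a sketch, assuming the
ordering the reworked patches aimed for:

  /* One consistent order everywhere cpuset state is rebuilt: */
  cpus_read_lock();
  mutex_lock(&cpuset_mutex);

  /* ... modify cpusets / rebuild sched domains ... */

  mutex_unlock(&cpuset_mutex);
  cpus_read_unlock();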