pping")
Signed-off-by: Prateek Sood
Reviewed-by: Takashi Iwai
Cc: sta...@vger.kernel.org
---
drivers/base/firmware_loader/firmware.h | 2 ++
drivers/base/firmware_loader/main.c | 17 +++--
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/drivers/base/firmw
pping")
Signed-off-by: Prateek Sood
Reviewed-by: Takashi Iwai
---
drivers/base/firmware_loader/firmware.h | 2 ++
drivers/base/firmware_loader/main.c | 17 +++--
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/drivers/base/firmware_loader/firmware.h
b/dr
On 8/13/2020 6:28 PM, Takashi Iwai wrote:
On Wed, 12 Aug 2020 21:00:19 +0200,
Prateek Sood wrote:
vfree() is being called on paged buffer allocated
using alloc_page() and mapped using vmap().
Freeing of pages in vfree() relies on nr_pages of
struct vm_struct. vmap() does not update nr_pages.
It can lead to memory leaks.
Signed-off-by: Prateek Sood
---
drivers/base/firmware_loader
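The bug described above (vfree() trusting nr_pages that vmap() never set) can be sketched as a userspace model. All names here are illustrative stand-ins, not the kernel API: a bulk free routine that only releases `nr_pages` pages frees nothing when the mapping step left that field zero, so every backing page leaks.

```c
#include <assert.h>
#include <stdlib.h>

struct fake_vm_area {
	void **pages;
	int nr_pages;	/* like vmap(): deliberately left 0 */
};

static int pages_outstanding;	/* counts live "pages" */

static void *fake_alloc_page(void)
{
	pages_outstanding++;
	return malloc(4096);
}

static void fake_free_page(void *p)
{
	pages_outstanding--;
	free(p);
}

/* Models vmap(): records the pages but does not update nr_pages. */
static struct fake_vm_area *fake_vmap(void **pages, int count)
{
	struct fake_vm_area *area = calloc(1, sizeof(*area));

	area->pages = pages;
	(void)count;		/* nr_pages stays 0 */
	return area;
}

/* Models vfree(): trusts nr_pages, so it frees zero pages here. */
static void fake_vfree(struct fake_vm_area *area)
{
	for (int i = 0; i < area->nr_pages; i++)
		fake_free_page(area->pages[i]);
	free(area);
}

int leak_demo(void)
{
	enum { N = 4 };
	void **pages = calloc(N, sizeof(void *));

	for (int i = 0; i < N; i++)
		pages[i] = fake_alloc_page();

	struct fake_vm_area *area = fake_vmap(pages, N);
	void **saved = area->pages;

	fake_vfree(area);		/* buggy path: frees 0 of N pages */
	int leaked = pages_outstanding;

	/* The fix pattern: the caller keeps its own page array and
	 * frees each page explicitly instead of relying on vfree(). */
	for (int i = 0; i < N; i++)
		fake_free_page(saved[i]);
	free(pages);

	return leaked;
}
```

This is only a model of the leak pattern; the actual patch keeps the page array in the firmware loader and frees the pages itself.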
t the 'perf_kprobe' PMU")
>
> -- Steve
>
>
> On Tue, 15 Oct 2019 11:47:25 +0530
> Prateek Sood wrote:
>
>> [ 943.034988] Unable to handle kernel paging request at virtual address
>> 003106f2003c
>> [ 943.043653] Mem abort info:
>> [ 9
On 10/15/19 11:47 AM, Prateek Sood wrote:
> [ 943.034988] Unable to handle kernel paging request at virtual address
> 003106f2003c
> [ 943.043653] Mem abort info:
> [ 943.046679] ESR = 0x9645
> [ 943.050428] Exception class = DABT (current EL), IL = 32 bits
> [
buf to be NULL always. This can result in perf_trace_buf
getting accessed from perf_trace_buf_alloc() without being initialized.
Acquiring
event_mutex in perf_kprobe_init() before calling perf_trace_event_init() should
fix this race.
Signed-off-by: Prateek Sood
---
kernel/trace/trace_event_perf.c | 4
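The race fix above serializes init against unregister under event_mutex. A minimal userspace sketch of that idea, with hypothetical names (not the actual tracing code): once both the lazy allocation and the free run under one lock, the NULL-check and the memset cannot interleave with a concurrent free.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static char *trace_buf;		/* stand-in for perf_trace_buf[i] */
static int mutex_held;		/* stand-in for event_mutex */

static void lock(void)   { assert(!mutex_held); mutex_held = 1; }
static void unlock(void) { mutex_held = 0; }

/* Both paths assert the lock, so an interleaving that frees the
 * buffer between init's NULL-check and its memset cannot occur. */
void buf_init(void)
{
	assert(mutex_held);
	if (!trace_buf)
		trace_buf = malloc(64);
	memset(trace_buf, 0, 64);	/* safe: cannot race with free */
}

void buf_unreg(void)
{
	assert(mutex_held);
	free(trace_buf);
	trace_buf = NULL;
}
```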
in kobject_get(k). And CPU2 has been called
> + * kernfs_create_dir_ns(). Meanwhile, CPU1 call sysfs_remove_dir()
> + * and sysfs_put(). This result in glue_dir->sd is freed.
> + *
> + * Then the CPU2 will see a stale "empty" but still potentially used
> + * glue dir arou
On 7/24/19 9:30 PM, Muchun Song wrote:
> There is a race condition between removing glue directory and adding a new
> device under the glue directory. It can be reproduced in following test:
>
> path 1: Add the child device under glue dir
> device_add()
> get_device_parent()
> mutex_lo
On 5/14/19 4:26 PM, Mukesh Ojha wrote:
> ++
>
> On 5/4/2019 8:17 PM, Muchun Song wrote:
>> Benjamin Herrenschmidt wrote on Thu, May 2, 2019 at 2:25 PM:
>>
> The basic idea yes, the whole bool *locked is horrid though.
> Wouldn't it
> work to have a get_device_parent_locked that always returns with
>
On 5/1/19 5:29 PM, Prateek Sood wrote:
> While loading firmware blobs parallely in different threads, it is possible
> to free sysfs node of glue_dirs in device_del() from a thread while another
> thread is trying to add subdir from device_add() in glue_dirs sysfs node.
>
>sd
kernfs_new_node()
kernfs_get(glue_dir)
Fix this race by making sure that kernfs_node for glue_dir is released only
when refcount for glue_dir kobj is 1.
Signed-off-by: Prateek Sood
---
Changes from v2->v3:
- Added patc
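The rule in the fix above, release the kernfs node only when the glue_dir kobject's refcount is 1, can be modeled in userspace. This is an illustrative sketch, not the driver-core code; the names are hypothetical:

```c
#include <assert.h>

/* glue_dir kobject with an attached kernfs node ("sd"). Freeing
 * sd while refcount > 1 would free it under a concurrent holder;
 * the fix tears sd down only on the final put. */
struct fake_kobj {
	int refcount;
	int sd_alive;	/* 1 while the kernfs node exists */
};

void fake_get(struct fake_kobj *k)
{
	k->refcount++;
}

/* Fixed put(): release sd only when this is the last reference. */
void fake_put(struct fake_kobj *k)
{
	if (k->refcount == 1)
		k->sd_alive = 0;	/* safe: no other users remain */
	k->refcount--;
}
```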
>sd
kernfs_new_node()
kernfs_get(glue_dir)
Fix this race by making sure that kernfs_node for glue_dir is released only
when refcount for glue_dir kobj is 1.
Signed-off-by: Prateek Sood
---
drivers/base/core.c | 5 -
1 file c
ng access from memset()
in perf_trace_buf_alloc().
Change-Id: I95ae774b9fcc653aa808f2d9f3e4359b3605e909
Signed-off-by: Prateek Sood
---
include/linux/trace_events.h    | 2 ++
include/trace/perf.h            | 5 +++-
kernel/trace/trace_event_perf.c | 63 ++--
A potential race exists between access of perf_trace_buf[i] from
perf_trace_buf_alloc() and perf_trace_event_unreg(). This can
result in perf_trace_buf[i] being NULL during access from memset()
in perf_trace_buf_alloc().
Signed-off-by: Prateek Sood
---
include/linux/trace_events.h | 2
Commit-ID: 6dc080eeb2ba01973bfff0d79844d7a59e12542e
Gitweb: https://git.kernel.org/tip/6dc080eeb2ba01973bfff0d79844d7a59e12542e
Author: Prateek Sood
AuthorDate: Fri, 30 Nov 2018 20:40:56 +0530
Committer: Ingo Molnar
CommitDate: Mon, 21 Jan 2019 11:15:36 +0100
sched/wait: Fix
before smp_rmb()
with load after the smp_rmb().
For the usage of rcuwait_wake_up() in __percpu_up_read() full barrier
(smp_mb) is required to complete the constraint of rcuwait_wake_up().
Signed-off-by: Prateek Sood
Acked-by: Davidlohr Bueso
---
kernel/exit.c | 4 ++--
1 file changed, 2 inse
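The full-barrier requirement stated above can be illustrated with C11 atomics. This is a userspace sketch of the ordering rule, not the kernel code: release/acquire alone never orders an earlier store before a later load, so the waker side needs a full fence between publishing the count and reading the waiter state.

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int sem_count;		/* models the semaphore count */
static atomic_int waiter_present;	/* models the rcuwait waiter */

void waker(void)
{
	atomic_store_explicit(&sem_count, 1, memory_order_relaxed);
	/* Full barrier, the analogue of smp_mb(): without it the
	 * load below may be reordered before the store above, and
	 * the wakeup can be missed. */
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load_explicit(&waiter_present, memory_order_relaxed)) {
		/* wake the waiter */
	}
}
```

A single-threaded run cannot exhibit the reordering; the sketch only shows where the fence belongs in the waker path.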
On 12/12/2018 08:58 PM, Andrea Parri wrote:
> On Fri, Nov 30, 2018 at 08:40:56PM +0530, Prateek Sood wrote:
>> In a scenario where cpu_hotplug_lock percpu_rw_semaphore is already
>> acquired for read operation by P1 using percpu_down_read().
>>
>> Now we have P1 in
On 12/04/2018 01:06 AM, Prateek Sood wrote:
> On 12/03/2018 12:08 PM, Davidlohr Bueso wrote:
>> On 2018-11-30 07:10, Prateek Sood wrote:
>>> In a scenario where cpu_hotplug_lock percpu_rw_semaphore is already
>>> acquired for read operation by P1 using percpu_down_read()
On 12/03/2018 12:08 PM, Davidlohr Bueso wrote:
> On 2018-11-30 07:10, Prateek Sood wrote:
>> In a scenario where cpu_hotplug_lock percpu_rw_semaphore is already
>> acquired for read operation by P1 using percpu_down_read().
>>
>> Now we have P1 in the path of releasing
mb) is required to complete the constraint of rcuwait_wake_up().
Signed-off-by: Prateek Sood
---
kernel/exit.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/exit.c b/kernel/exit.c
index f1d74f0..a10820d 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -306,7 +306,7 @
On 02/02/2018 06:49 PM, Rafael J. Wysocki wrote:
> On Fri, Feb 2, 2018 at 1:53 PM, Prateek Sood wrote:
>> On 02/02/2018 05:18 PM, Rafael J. Wysocki wrote:
>>> On Friday, February 2, 2018 12:41:58 PM CET Prateek Sood wrote:
>>>> Hi Viresh,
>>>>
>>
On 02/02/2018 05:18 PM, Rafael J. Wysocki wrote:
> On Friday, February 2, 2018 12:41:58 PM CET Prateek Sood wrote:
>> Hi Viresh,
>>
>> One scenario is there where a kernel panic is observed in
>> cpufreq during suspend/resume.
>>
>> pm_s
lem.
---8<--
Co-developed-by: Gaurav Kohli
Signed-off-by: Gaurav Kohli
Signed-off-by: Prateek Sood
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 02a497e..732e5a2 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -
On 01/02/2018 09:46 PM, Tejun Heo wrote:
> Hello,
>
> On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
>> task T is waiting for cpuset_mutex acquired
>> by kworker/2:1
>>
>> sh ==> cpuhp/2 ==> kworker/2:1 ==> sh
>>
>> kworke
On 12/13/2017 09:36 PM, Tejun Heo wrote:
> Hello, Prateek.
>
> On Wed, Dec 13, 2017 at 01:20:46PM +0530, Prateek Sood wrote:
>> This change makes the usage of cpuset_hotplug_workfn() from cpu
>> hotplug path synchronous. For memory hotplug it still remains
>> a
nge-Id: I8874fb04479c136cae4dabd5c168c7749df4
Signed-off-by: Prateek Sood
---
kernel/cgroup/cgroup-v1.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
index 024085d..a2c05d2 100644
--- a/kernel/cgroup/cgroup-v1.c
+++ b/kernel/cgroup/cgroup-v1.c
@@
On 12/15/2017 06:52 PM, Tejun Heo wrote:
> Hello, Prateek.
>
> On Fri, Dec 15, 2017 at 02:24:55PM +0530, Prateek Sood wrote:
>> Following are two ways to improve cgroup_transfer_tasks(). In
>> both cases task in PF_EXITING state would be left in source
>> cgrou
On 12/13/2017 09:10 PM, Tejun Heo wrote:
Hi TJ,
> Hello, Prateek.
>
> On Wed, Dec 13, 2017 at 07:58:24PM +0530, Prateek Sood wrote:
>> Did you mean something like below. If not then could you
>> please share a patch for this problem in
>> cgroup_transfer_tasks().
>
On 12/11/2017 09:02 PM, Tejun Heo wrote:
> Hello, Prateek.
>
> On Fri, Dec 08, 2017 at 05:15:55PM +0530, Prateek Sood wrote:
>> There is one deadlock issue during cgroup migration from cpu
>> hotplug path when a task T is being moved from source to
>> destinatio
On 12/11/2017 08:50 PM, Tejun Heo wrote:
> Hello, Peter.
>
> On Tue, Dec 05, 2017 at 12:01:17AM +0100, Peter Zijlstra wrote:
>>> AFAICS, this should remove the circular dependency you originally
>>> reported. I'll revert the two cpuset commits for now.
>>
>> So I liked his patches in that we woul
On 12/08/2017 03:10 PM, Prateek Sood wrote:
> On 12/05/2017 04:31 AM, Peter Zijlstra wrote:
>> On Mon, Dec 04, 2017 at 02:58:25PM -0800, Tejun Heo wrote:
>>> Hello, again.
>>>
>>> On Mon, Dec 04, 2017 at 12:22:19PM -0800, Tejun Heo wrote:
>>>> Hel
On 12/05/2017 04:31 AM, Peter Zijlstra wrote:
> On Mon, Dec 04, 2017 at 02:58:25PM -0800, Tejun Heo wrote:
>> Hello, again.
>>
>> On Mon, Dec 04, 2017 at 12:22:19PM -0800, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Mon, Dec 04, 2017 at 10:44:49AM +0530,
On 11/28/2017 05:05 PM, Prateek Sood wrote:
> CPU1
> cpus_read_lock+0x3e/0x80
> static_key_slow_inc+0xe/0xa0
> cpuset_css_online+0x62/0x330
> online_css+0x26/0x80
> cgroup_apply_control_enable+0x266/0x3d0
> cgroup_mkdir+0x37d/0x4f0
> kernfs_iop_mkdir+0x53/0x80
> vfs_mkdi
static_branch_inc/static_branch_dec in
cpuset_inc()/cpuset_dec().
Signed-off-by: Prateek Sood
---
include/linux/cpuset.h     | 8
include/linux/jump_label.h | 10 --
kernel/cgroup/cpuset.c     | 4 ++--
kernel/jump_label.c        | 13 +
4 files changed, 27 insertions
On 11/15/2017 10:35 PM, Tejun Heo wrote:
> On Wed, Nov 15, 2017 at 11:37:42AM +0100, Peter Zijlstra wrote:
>> On Wed, Nov 15, 2017 at 03:56:26PM +0530, Prateek Sood wrote:
>>> Any improvement/suggestion for this patch?
>>
>> I would have done 2 patches, the first
-by: Prateek Sood
---
include/linux/cpuset.h | 6 --
kernel/cgroup/cpuset.c | 41 -
kernel/power/process.c | 2 --
kernel/sched/core.c    | 1 -
4 files changed, 20 insertions(+), 30 deletions(-)
diff --git a/include/linux/cpuset.h b/include/linux
();
update_cpumasks_hier();
rebuild_sched_domains_locked();
get_online_cpus();
percpu_down_read(&cpu_hotplug_lock); //waiting
Eliminating deadlock by reversing the locking order for cpuset_mutex and
cpu_hotplug_lock.
Signed-off-by: Prateek So
This patch does following
1- Remove circular dependency deadlock by inverting order of
cpu_hotplug_lock and cpuset_mutex.
2- Make cpuset_hotplug_workfn() synchronous for cpu hotplug path.
For memory hotplug path it still gets queued as a work item.
Prateek Sood (2):
cgroup
On 10/30/2017 12:46 PM, Prateek Sood wrote:
> Remove circular dependency deadlock in a scenario where hotplug of CPU is
> being done while there are updates to cgroup and cpuset triggered from
> userspace.
>
> Process A => kthreadd => Process B => Process C =>
tex, cpuset_hotplug_workfn() related functionality can be
done synchronously from the context doing cpu hotplug. For memory hotplug
it still gets queued as a work item.
Signed-off-by: Prateek Sood
---
include/linux/cpuset.h | 6
kernel/cgroup/cpuset.c | 94 +++---
On 10/26/2017 07:35 PM, Waiman Long wrote:
> On 10/26/2017 07:52 AM, Prateek Sood wrote:
>> Remove circular dependency deadlock in a scenario where hotplug of CPU is
>> being done while there are updates to cgroup and cpuset triggered from
>> userspace.
>>
>> P
and
cpu_hotplug_lock.
Signed-off-by: Prateek Sood
---
include/linux/cpuset.h | 6 -
kernel/cgroup/cpuset.c | 70 ++
kernel/power/process.c | 2 --
kernel/sched/core.c    | 1 -
4 files changed, 36 insertions(+), 43 deletions(-)
diff --gi
On 10/11/2017 03:18 PM, Peter Zijlstra wrote:
> On Mon, Oct 09, 2017 at 06:57:46PM +0530, Prateek Sood wrote:
>> On 09/07/2017 11:21 PM, Peter Zijlstra wrote:
>
>>> But if you invert these locks, the need for cpuset_hotplug_workfn() goes
>>> away, at least for th
On 09/07/2017 11:21 PM, Peter Zijlstra wrote:
> On Thu, Sep 07, 2017 at 07:26:23PM +0530, Prateek Sood wrote:
>> Remove circular dependency deadlock in a scenario where hotplug of CPU is
>> being done while there are updates to cgroup and cpuset triggered from
>> userspace.
Commit-ID: 9c29c31830a4eca724e137a9339137204bbb31be
Gitweb: https://git.kernel.org/tip/9c29c31830a4eca724e137a9339137204bbb31be
Author: Prateek Sood
AuthorDate: Thu, 7 Sep 2017 20:00:58 +0530
Committer: Ingo Molnar
CommitDate: Fri, 29 Sep 2017 10:10:20 +0200
locking/rwsem-xadd: Fix
On 09/07/2017 08:00 PM, Prateek Sood wrote:
> If a spinner is present, there is a chance that the load of
> rwsem_has_spinner() in rwsem_wake() can be reordered with
> respect to decrement of rwsem count in __up_write() leading
> to wakeup being missed.
>
>
On 09/07/2017 11:15 PM, Peter Zijlstra wrote:
> On Thu, Sep 07, 2017 at 07:26:23PM +0530, Prateek Sood wrote:
>> Remove circular dependency deadlock in a scenario where hotplug of CPU is
>> being done while there are updates to cgroup and cpuset triggered from
>> users
sulted after sem->count is updated in up_write context.
Signed-off-by: Prateek Sood
---
kernel/locking/rwsem-xadd.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 02f6606..1fefe6d 100644
--- a
t;
>>> On Wed, Aug 23, 2017 at 04:58:55PM +0530, Prateek Sood wrote:
>>>> If a spinner is present, there is a chance that the load of
>>>> rwsem_has_spinner() in rwsem_wake() can be reordered with
>>>> respect to decrement of rwsem cou
mutex_lock(&cpuset_mutex); //held
update_cpumask();
update_cpumasks_hier();
rebuild_sched_domains_locked();
get_online_cpus();
percpu_down_read(&cpu_hotplug_lock); //waiting
Signed-off-by: Prateek Sood
---
kernel/cgroup/cpuset.c | 32
On 09/07/2017 02:26 PM, Boqun Feng wrote:
> On Thu, Sep 07, 2017 at 09:28:48AM +0200, Peter Zijlstra wrote:
>> On Thu, Sep 07, 2017 at 11:34:12AM +0530, Prateek Sood wrote:
>>> Remove circular dependency deadlock in a scenario where hotplug of CPU is
>>> being done
On 09/07/2017 12:58 PM, Peter Zijlstra wrote:
> On Thu, Sep 07, 2017 at 11:34:12AM +0530, Prateek Sood wrote:
>> Remove circular dependency deadlock in a scenario where hotplug of CPU is
>> being done while there are updates to cgroup and cpuset triggered from
>> userspace.
waiting]
init:1 - lock(cpuset_mutex) [held]
percpu_down_read(&cpu_hotplug_lock) [waiting]
Eliminate this dependency by reordering the locking of cpuset_mutex
and cpu_hotplug_lock in the following order:
1. Acquire cpu_hotplug_lock (read)
2. Acquire cpuset_mutex
Signed-off-by: Prate
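The two-step ordering above works because a consistent lock order makes the wait-for cycle impossible. A userspace sketch (illustrative only; "A" plays cpu_hotplug_lock, "B" plays cpuset_mutex): with opposite orders the two processes block on each other, with the fixed order no cycle can form.

```c
#include <assert.h>

/* holder[l] = process id holding lock l, or 0.
 * waiting_for[p] = lock+1 that process p is blocked on, or 0. */
static int holder[2], waiting_for[3];

static int try_take(int proc, int lock)
{
	if (waiting_for[proc])
		return 0;	/* already blocked, makes no progress */
	if (holder[lock] && holder[lock] != proc) {
		waiting_for[proc] = lock + 1;
		return 0;
	}
	holder[lock] = proc;
	return 1;
}

/* A cycle exists when both processes block on each other's lock. */
static int deadlocked(void)
{
	return waiting_for[1] && waiting_for[2];
}

/* consistent_order=0 models the reported deadlock (P2 takes B
 * first); consistent_order=1 models the fixed ordering. */
int demo(int consistent_order)
{
	holder[0] = holder[1] = 0;
	waiting_for[1] = waiting_for[2] = 0;

	try_take(1, 0);				/* P1 takes A */
	try_take(2, consistent_order ? 0 : 1);	/* P2: A (fixed) or B (buggy) */
	try_take(1, 1);				/* P1 wants B */
	try_take(2, consistent_order ? 1 : 0);	/* P2 wants its second lock */

	return deadlocked();
}
```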
On 09/06/2017 06:26 PM, Waiman Long wrote:
> On 09/06/2017 07:48 AM, Prateek Sood wrote:
>> Remove circular dependency deadlock in a scenario where hotplug of CPU is
>> being done while there are updates to cgroup and cpuset triggered from
>> userspace.
>>
>>
- lock(cpuset_mutex) [held]
lock(cpuhotplug.mutex) [waiting]
Eliminate this dependency by reordering the locking of cpuset_mutex
and cpuhotplug.mutex in the following order:
1. Acquire cpuhotplug.mutex
2. Acquire cpuset_mutex
Signed-off-by: Prateek Sood
---
kernel/cgroup/c
On 09/05/2017 06:52 PM, Tejun Heo wrote:
> Hello,
>
> On Thu, Aug 31, 2017 at 06:43:56PM +0530, Prateek Sood wrote:
>>> 6) cpuset_mutex is acquired by task init:1 and is waiting for cpuhotplug
>>> lock.
>
> Yeah, this is the problematic one.
>
>>> W
On 08/30/2017 07:28 PM, Prateek Sood wrote:
> Hi,
>
> While using Linux version 4.4 on my setup, I have observed a deadlock.
>
> 1) CPU3 is getting hot plugged from a worker thread(kworker/0:0) on CPU0.
> 2) Cpu hot plug flow needs to flush the work items on hot plugging CPU3
Hi,
While using Linux version 4.4 on my setup, I have observed a deadlock.
1) CPU3 is getting hot plugged from a worker thread(kworker/0:0) on CPU0.
2) Cpu hot plug flow needs to flush the work items on hot plugging CPU3,
with a high priority worker from the corresponding CPU(cpu3) worker pool
ke sure that the spinner state is
consulted after sem->count is updated in up_write context.
Signed-off-by: Prateek Sood
---
kernel/locking/rwsem-xadd.c | 45 +
1 file changed, 45 insertions(+)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locki
Commit-ID: 50972fe78f24f1cd0b9d7bbf1f87d2be9e4f412e
Gitweb: http://git.kernel.org/tip/50972fe78f24f1cd0b9d7bbf1f87d2be9e4f412e
Author: Prateek Sood
AuthorDate: Fri, 14 Jul 2017 19:17:56 +0530
Committer: Ingo Molnar
CommitDate: Thu, 10 Aug 2017 12:28:54 +0200
locking/osq_lock: Fix
On 07/31/2017 10:54 PM, Prateek Sood wrote:
> Fix ordering of link creation between node->prev and prev->next in
> osq_lock(). A case in which the status of optimistic spin queue is
> CPU6->CPU2 in which CPU6 has acquired the lock.
>
> tail
> v
ntly,
      tail
        v
  ,-. <- ,-. <- ,-.
  |6|    |2|    |0|
  `-'    `-'    `-'
     `----------^
so if CP
sulted after sem->count is updated in up_write context.
Signed-off-by: Prateek Sood
---
kernel/locking/rwsem-xadd.c | 34 ++
1 file changed, 34 insertions(+)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 34e727f..21c111a 1006
is available
or need_resched is set. For an RT task, need_resched will not be set. Task T3
will not be able to bail out of the infinite loop.
Signed-off-by: Prateek Sood
---
kernel/locking/osq_lock.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/locking
prev is committed resulting in change of CPU0 prev back to
CPU2 node. CPU2 node->next is NULL currently, so if CPU0 gets into unqueue
path of osq_lock it will keep spinning in infinite loop as condition
prev->next == node will never be true.
Signed-off-by: Prateek Sood
---
kernel/locking/o
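The link-ordering fix described above comes down to publishing the backward pointer before the node becomes reachable. A hypothetical userspace model of the MCS-style node linking (not the actual osq_lock() code): node->prev must be set before prev->next, otherwise a CPU walking backward through ->prev during unqueue can follow a stale pointer and spin forever waiting for prev->next == node.

```c
#include <assert.h>
#include <stddef.h>

struct qnode {
	struct qnode *prev, *next;
};

/* Fixed enqueue order: backward link first, then the forward
 * link that makes the node visible to the rest of the queue.
 * (In the kernel these stores also need WRITE_ONCE/barriers;
 * this single-threaded model only shows the required order.) */
void enqueue(struct qnode *node, struct qnode *prev)
{
	node->prev = prev;	/* step 1: before node is reachable */
	prev->next = node;	/* step 2: publish the node */
}
```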