Re: [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rafael J. Wysocki
On Fri, Aug 4, 2023 at 10:38 PM Rob Clark  wrote:
>
> On Fri, Aug 4, 2023 at 12:11 PM Rafael J. Wysocki  wrote:
> >
> > On Fri, Aug 4, 2023 at 8:38 PM Rob Clark  wrote:
> > >
> > > On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  wrote:
> > > >
> > > > On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> > > > > [...]

Re: [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rob Clark
On Fri, Aug 4, 2023 at 12:11 PM Rafael J. Wysocki  wrote:
>
> On Fri, Aug 4, 2023 at 8:38 PM Rob Clark  wrote:
> >
> > On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  wrote:
> > >
> > > On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> > > > [...]

Re: [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rafael J. Wysocki
On Fri, Aug 4, 2023 at 8:38 PM Rob Clark  wrote:
>
> On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  wrote:
> >
> > On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> > > [...]

Re: [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rob Clark
On Fri, Aug 4, 2023 at 10:07 AM Rafael J. Wysocki  wrote:
>
> On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> > [...]
Re: [PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-04 Thread Rafael J. Wysocki
On Fri, Aug 4, 2023 at 12:02 AM Rob Clark  wrote:
> [...]

[PATCH v3 3/9] PM / QoS: Fix constraints alloc vs reclaim locking

2023-08-03 Thread Rob Clark
From: Rob Clark 

In the process of adding lockdep annotation for drm GPU scheduler's
job_run() to detect potential deadlock against shrinker/reclaim, I hit
this lockdep splat:

   ======================================================
   WARNING: possible circular locking dependency detected
   6.2.0-rc8-debug+ #558 Tainted: G        W
   ------------------------------------------------------
   ring0/125 is trying to acquire lock:
   ffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: dev_pm_qos_update_request+0x38/0x68

   but task is already holding lock:
   ff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:

   -> #4 (&gpu->active_lock){+.+.}-{3:3}:
      __mutex_lock+0xcc/0x3c8
      mutex_lock_nested+0x30/0x44
      msm_gpu_submit+0xec/0x178
      msm_job_run+0x78/0x150
      drm_sched_main+0x290/0x370
      kthread+0xf0/0x100
      ret_from_fork+0x10/0x20

   -> #3 (dma_fence_map){++++}-{0:0}:
      __dma_fence_might_wait+0x74/0xc0
      dma_resv_lockdep+0x1f4/0x2f4
      do_one_initcall+0x104/0x2bc
      kernel_init_freeable+0x344/0x34c
      kernel_init+0x30/0x134
      ret_from_fork+0x10/0x20

   -> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
      fs_reclaim_acquire+0x80/0xa8
      slab_pre_alloc_hook.constprop.0+0x40/0x25c
      __kmem_cache_alloc_node+0x60/0x1cc
      __kmalloc+0xd8/0x100
      topology_parse_cpu_capacity+0x8c/0x178
      get_cpu_for_node+0x88/0xc4
      parse_cluster+0x1b0/0x28c
      parse_cluster+0x8c/0x28c
      init_cpu_topology+0x168/0x188
      smp_prepare_cpus+0x24/0xf8
      kernel_init_freeable+0x18c/0x34c
      kernel_init+0x30/0x134
      ret_from_fork+0x10/0x20

   -> #1 (fs_reclaim){+.+.}-{0:0}:
      __fs_reclaim_acquire+0x3c/0x48
      fs_reclaim_acquire+0x54/0xa8
      slab_pre_alloc_hook.constprop.0+0x40/0x25c
      __kmem_cache_alloc_node+0x60/0x1cc
      kmalloc_trace+0x50/0xa8
      dev_pm_qos_constraints_allocate+0x38/0x100
      __dev_pm_qos_add_request+0xb0/0x1e8
      dev_pm_qos_add_request+0x58/0x80
      dev_pm_qos_expose_latency_limit+0x60/0x13c
      register_cpu+0x12c/0x130
      topology_init+0xac/0xbc
      do_one_initcall+0x104/0x2bc
      kernel_init_freeable+0x344/0x34c
      kernel_init+0x30/0x134
      ret_from_fork+0x10/0x20

   -> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
      __lock_acquire+0xe00/0x1060
      lock_acquire+0x1e0/0x2f8
      __mutex_lock+0xcc/0x3c8
      mutex_lock_nested+0x30/0x44
      dev_pm_qos_update_request+0x38/0x68
      msm_devfreq_boost+0x40/0x70
      msm_devfreq_active+0xc0/0xf0
      msm_gpu_submit+0x10c/0x178
      msm_job_run+0x78/0x150
      drm_sched_main+0x290/0x370
      kthread+0xf0/0x100
      ret_from_fork+0x10/0x20

   other info that might help us debug this:

   Chain exists of:
     dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock

   Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(&gpu->active_lock);
                                 lock(dma_fence_map);
                                 lock(&gpu->active_lock);
    lock(dev_pm_qos_mtx);

   *** DEADLOCK ***

   3 locks held by ring0/123:
    #0: ff8087251170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
    #1: ffd00b0e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150
    #2: ff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178

   stack backtrace:
   CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
   Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
   Call trace:
    dump_backtrace.part.0+0xb4/0xf8
    show_stack+0x20/0x38
    dump_stack_lvl+0x9c/0xd0
    dump_stack+0x18/0x34
    print_circular_bug+0x1b4/0x1f0
    check_noncircular+0x78/0xac
    __lock_acquire+0xe00/0x1060
    lock_acquire+0x1e0/0x2f8
    __mutex_lock+0xcc/0x3c8
    mutex_lock_nested+0x30/0x44
    dev_pm_qos_update_request+0x38/0x68
    msm_devfreq_boost+0x40/0x70
    msm_devfreq_active+0xc0/0xf0
    msm_gpu_submit+0x10c/0x178
    msm_job_run+0x78/0x150
    drm_sched_main+0x290/0x370
    kthread+0xf0/0x100
    ret_from_fork+0x10/0x20
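
(For reference, the annotation in question wraps the scheduler's job_run()
path in a fence-signalling critical section, roughly like the sketch below.
It uses the existing dma_fence_begin_signalling()/dma_fence_end_signalling()
helpers; the msm_job_run() body and the do_the_actual_submit() helper are
simplified stand-ins, not the actual series.)

    static struct dma_fence *msm_job_run(struct drm_sched_job *job)
    {
            struct dma_fence *fence;
            bool fence_cookie;

            /*
             * lockdep treats everything between begin/end_signalling as
             * fence-signalling (i.e. reclaim) context, so taking a lock
             * here that is held around GFP_KERNEL allocations elsewhere
             * (like dev_pm_qos_mtx) trips the splat above.
             */
            fence_cookie = dma_fence_begin_signalling();

            fence = do_the_actual_submit(job);  /* stand-in for the real submit path */

            dma_fence_end_signalling(fence_cookie);

            return fence;
    }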

The issue is that dev_pm_qos_mtx is held in the runpm suspend/resume (or
freq change) path, but it is also held across allocations that could
recurse into the shrinker.
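
Schematically, the problematic shape in the QoS code is the following
(a simplified sketch, not the verbatim drivers/base/power/qos.c; error
paths and the notifier setup are trimmed):

    static DEFINE_MUTEX(dev_pm_qos_mtx);

    static int qos_add_request_sketch(struct device *dev)
    {
            int ret = 0;

            mutex_lock(&dev_pm_qos_mtx);

            if (!dev->power.qos) {
                    /*
                     * GFP_KERNEL allocation while holding dev_pm_qos_mtx:
                     * this may enter direct reclaim, and reclaim must not
                     * depend on anything that needs this same mutex --
                     * that is the inversion lockdep is complaining about.
                     */
                    struct dev_pm_qos *qos = kzalloc(sizeof(*qos), GFP_KERNEL);

                    if (!qos)
                            ret = -ENOMEM;
                    else
                            dev->power.qos = qos;
            }

            mutex_unlock(&dev_pm_qos_mtx);
            return ret;
    }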

Solve this by changing dev_pm_qos_constraints_allocate() into a function
that can be called unconditionally before the device qos object is
needed and before acquiring dev_pm_qos_mtx.  This way the allocations can
be done without holding the mutex.  In the case that we raced with
another thread to allocate the qos object, detect this *after* acquiring
the dev_pm_qos_mtx and simply free the redundant allocations.
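
In sketch form (helper name and the dev->power.qos publication point are
simplified here; the actual patch reworks dev_pm_qos_constraints_allocate()
itself):

    static int dev_pm_qos_ensure_constraints(struct device *dev)
    {
            /*
             * Allocate up front, with no locks held, so any reclaim
             * recursion from GFP_KERNEL is harmless.
             */
            struct dev_pm_qos *qos = kzalloc(sizeof(*qos), GFP_KERNEL);
            int ret = 0;

            mutex_lock(&dev_pm_qos_mtx);

            if (dev->power.qos) {
                    /* Raced with another thread; ours is redundant. */
                    kfree(qos);
            } else if (qos) {
                    dev->power.qos = qos;
            } else {
                    ret = -ENOMEM;
            }

            mutex_unlock(&dev_pm_qos_mtx);
            return ret;
    }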