On Fri, Jan 29, 2016 at 05:59:46AM -0500, Tejun Heo wrote:
> fca839c00a12 ("workqueue: warn if memory reclaim tries to flush
> !WQ_MEM_RECLAIM workqueue") implemented flush dependency warning which
> triggers if a PF_MEMALLOC task or WQ_MEM_RECLAIM workqueue tries to
> flush a !WQ_MEM_RECLAIM workqueue.
> 
> This assumes that workqueues marked with WQ_MEM_RECLAIM sit in the
> memory reclaim path, where making them depend on something which may
> need more memory to make forward progress can lead to deadlocks.
> Unfortunately, workqueues created with the legacy create*_workqueue()
> interface always have WQ_MEM_RECLAIM set regardless of whether they
> are depended upon by memory reclaim or not.  These spurious
> WQ_MEM_RECLAIM markings cause spurious triggering of the flush
> dependency checks.
> 
>   WARNING: CPU: 0 PID: 6 at kernel/workqueue.c:2361 check_flush_dependency+0x138/0x144()
>   workqueue: WQ_MEM_RECLAIM deferwq:deferred_probe_work_func is flushing !WQ_MEM_RECLAIM events:lru_add_drain_per_cpu
>   ...
>   Workqueue: deferwq deferred_probe_work_func
>   [<c0017acc>] (unwind_backtrace) from [<c0013134>] (show_stack+0x10/0x14)
>   [<c0013134>] (show_stack) from [<c0245f18>] (dump_stack+0x94/0xd4)
>   [<c0245f18>] (dump_stack) from [<c0026f9c>] (warn_slowpath_common+0x80/0xb0)
>   [<c0026f9c>] (warn_slowpath_common) from [<c0026ffc>] (warn_slowpath_fmt+0x30/0x40)
>   [<c0026ffc>] (warn_slowpath_fmt) from [<c00390b8>] (check_flush_dependency+0x138/0x144)
>   [<c00390b8>] (check_flush_dependency) from [<c0039ca0>] (flush_work+0x50/0x15c)
>   [<c0039ca0>] (flush_work) from [<c00c51b0>] (lru_add_drain_all+0x130/0x180)
>   [<c00c51b0>] (lru_add_drain_all) from [<c00f728c>] (migrate_prep+0x8/0x10)
>   [<c00f728c>] (migrate_prep) from [<c00bfbc4>] (alloc_contig_range+0xd8/0x338)
>   [<c00bfbc4>] (alloc_contig_range) from [<c00f8f18>] (cma_alloc+0xe0/0x1ac)
>   [<c00f8f18>] (cma_alloc) from [<c001cac4>] (__alloc_from_contiguous+0x38/0xd8)
>   [<c001cac4>] (__alloc_from_contiguous) from [<c001ceb4>] (__dma_alloc+0x240/0x278)
>   [<c001ceb4>] (__dma_alloc) from [<c001cf78>] (arm_dma_alloc+0x54/0x5c)
>   [<c001cf78>] (arm_dma_alloc) from [<c0355ea4>] (dmam_alloc_coherent+0xc0/0xec)
>   [<c0355ea4>] (dmam_alloc_coherent) from [<c039cc4c>] (ahci_port_start+0x150/0x1dc)
>   [<c039cc4c>] (ahci_port_start) from [<c0384734>] (ata_host_start.part.3+0xc8/0x1c8)
>   [<c0384734>] (ata_host_start.part.3) from [<c03898dc>] (ata_host_activate+0x50/0x148)
>   [<c03898dc>] (ata_host_activate) from [<c039d558>] (ahci_host_activate+0x44/0x114)
>   [<c039d558>] (ahci_host_activate) from [<c039f05c>] (ahci_platform_init_host+0x1d8/0x3c8)
>   [<c039f05c>] (ahci_platform_init_host) from [<c039e6bc>] (tegra_ahci_probe+0x448/0x4e8)
>   [<c039e6bc>] (tegra_ahci_probe) from [<c0347058>] (platform_drv_probe+0x50/0xac)
>   [<c0347058>] (platform_drv_probe) from [<c03458cc>] (driver_probe_device+0x214/0x2c0)
>   [<c03458cc>] (driver_probe_device) from [<c0343cc0>] (bus_for_each_drv+0x60/0x94)
>   [<c0343cc0>] (bus_for_each_drv) from [<c03455d8>] (__device_attach+0xb0/0x114)
>   [<c03455d8>] (__device_attach) from [<c0344ab8>] (bus_probe_device+0x84/0x8c)
>   [<c0344ab8>] (bus_probe_device) from [<c0344f48>] (deferred_probe_work_func+0x68/0x98)
>   [<c0344f48>] (deferred_probe_work_func) from [<c003b738>] (process_one_work+0x120/0x3f8)
>   [<c003b738>] (process_one_work) from [<c003ba48>] (worker_thread+0x38/0x55c)
>   [<c003ba48>] (worker_thread) from [<c0040f14>] (kthread+0xdc/0xf4)
>   [<c0040f14>] (kthread) from [<c000f778>] (ret_from_fork+0x14/0x3c)
> 
> Fix it by marking workqueues created via create*_workqueue() with
> __WQ_LEGACY and disabling flush dependency checks on them.
> 
> Signed-off-by: Tejun Heo <t...@kernel.org>
> Reported-by: Thierry Reding <thierry.red...@gmail.com>
> Link: http://lkml.kernel.org/g/20160126173843.ga11...@ulmo.nvidia.com

Thanks for fixing this, everything is back to normal:

Tested-by: Thierry Reding <tred...@nvidia.com>
