On 2021/4/8 22:55, riteshh wrote:
> On 21/04/08 09:08AM, Miaohe Lin wrote:
>> Hi all,
>> When I was investigating the swap code, I found some possible race
>> windows. This series aims to fix all of these races. But using the
>> current get/put_swap_device() to guard against concurrent swapoff for
>> swap_readpage() looks terrible, because swap_readpage() may take a
>> really long time. To reduce the performance overhead on the hot path
>> as much as possible, it appears we can use a percpu_ref to close this
>> race window (as suggested by Huang, Ying). Patch 1 adds percpu_ref
>> support for swap and the rest of the patches use it to close various
>> race windows. More details can be found in the respective changelogs.
>> Thanks!
>>
>> Miaohe Lin (5):
>>   mm/swapfile: add percpu_ref support for swap
>>   swap: fix do_swap_page() race with swapoff
>>   mm/swap_state: fix get_shadow_from_swap_cache() race with swapoff
>>   mm/swap_state: fix potential faulted in race in swap_ra_info()
>>   mm/swap_state: fix swap_cluster_readahead() race with swapoff
> 

Many thanks for the quick response.

> Somehow I see Patch-1 and Patch-2 are missing on linux-mm[1].

I have no idea why Patch-1 and Patch-2 are missing. But they can be found at:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2542188.html
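
To give a rough idea of what Patch-1 does without digging up the mails:
the direction is to let get_swap_device()/put_swap_device() pin the swap
device with a percpu_ref instead of the current SWP_VALID based check.
A simplified sketch (the field name "users" and the details here are
illustrative only, please refer to the actual patch for the real code):

	/* Sketch only: pin a swap device against concurrent swapoff. */
	struct swap_info_struct *get_swap_device(swp_entry_t entry)
	{
		struct swap_info_struct *si;

		if (!entry.val)
			return NULL;
		si = swp_swap_info(entry);
		if (!si)
			return NULL;
		/* Fails once swapoff has started killing the ref. */
		if (!percpu_ref_tryget_live(&si->users))
			return NULL;
		if (swp_offset(entry) >= si->max) {
			percpu_ref_put(&si->users);
			return NULL;
		}
		return si;
	}

	static inline void put_swap_device(struct swap_info_struct *si)
	{
		percpu_ref_put(&si->users);
	}

On the swapoff side the idea would then be to percpu_ref_kill() the ref
and wait for it to drain before the swap cache and swap_info_struct are
torn down, so any reader still holding the ref stays safe.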

> Also I wanted to ask if you have a way to trigger this in a more controlled
> environment (consistently)?
> 

This is a *theoretical* issue. The race window is very small, but not impossible.
Please see the discussion:
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2530094.html
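
To make the window easier to see: the problem is the lockless sections
that look up the swap cache or call swap_readpage() while swapoff may be
freeing things underneath them. With the percpu_ref guard sketched above,
the fix is simply to hold the reference across the whole section. A
simplified sketch of the intended usage in the do_swap_page() path (not
the exact patch, error handling and readahead details trimmed):

	/* Sketch only: hold the device across the lockless section. */
	si = get_swap_device(entry);
	if (unlikely(!si)) {
		/* Raced with swapoff: the swap entry is stale, bail out. */
		goto out;
	}

	page = lookup_swap_cache(entry, vma, vmf->address);
	if (!page) {
		/* ... allocate a page and read it in ... */
		swap_readpage(page, true);	/* may block for a long time */
	}
	/* ... rest of the fault handling ... */

	put_swap_device(si);

The idea is that swapoff cannot complete while the ref is held, so
swap_readpage() taking a long time no longer matters for correctness,
only for how long swapoff may have to wait.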

> [1]: 
> https://patchwork.kernel.org/project/linux-mm/cover/20210408130820.48233-1-linmia...@huawei.com/
> 

Thanks again.

> -ritesh
> 
>>
>>  include/linux/swap.h |  4 +++-
>>  mm/memory.c          | 10 +++++++++
>>  mm/swap_state.c      | 33 +++++++++++++++++++++--------
>>  mm/swapfile.c        | 50 +++++++++++++++++++++++++++-----------------
>>  4 files changed, 68 insertions(+), 29 deletions(-)
>>
>> --
>> 2.19.1
>>
>>
> .
> 
