https://bugzilla.kernel.org/show_bug.cgi?id=220575
--- Comment #16 from JY ([email protected]) ---
(In reply to Chao Yu from comment #7)
> Can you please hook fscrypt_free_bounce_page() to set page private w/
> special value, something as below:
>
> void fscrypt_free_bounce_page(struct page *bounce_page)
> {
>         if (!bounce_page)
>                 return;
>         set_page_private(bounce_page, (unsigned long)0xF2F52011);
>         ClearPagePrivate(bounce_page);
>         mempool_free(bounce_page, fscrypt_bounce_page_pool);
> }
>
> And add some check conditions in f2fs_is_cp_guaranteed() to see whether the
> page has been freed before inc_page_count().

I tried the following modification:

+       set_page_private(bounce_page, (unsigned long)0x5566F2F5);

But I got two different results from different panics:
fscrypt_pagecache_page(page):0x000000005566f2f5 and
fscrypt_pagecache_page(page):0x0000000000000000 (as shown below).

[38417.862874] JY f2fs_is_cp_guaranteed 65 bounced_page:0xfffffffe81cd6760, _private:0xfffffffe824723c0, fscrypt_pagecache_page(page):0x0000000000000000
[38417.921850] JYJY :fffffffe824723c0 is the PAGE
[38417.968256] page: refcount:4 mapcount:1 mapping:000000000615ef5b index:0x6c pfn:0x74a0c
[38417.998050] memcg:ffffff804c331380
[38418.018203] flags: 0x800000000009029(locked|uptodate|lru|owner_2|private|zone=0)
[38418.046079] raw: 0800000000009029 fffffffe82475618 fffffffe82484fc8 ffffff806b25c460
[38418.100286] raw: 000000000000006c 0000000000000009 0000000400000000 ffffff804c331380
[38418.143969] raw: ffffff8064457540 0000000000000000
[38418.162562] page dumped because: JY got the BUG!
[38418.199250] page_owner tracks the page as allocated
[38418.225840] page last allocated via order 0, migratetype Movable, gfp_mask 0x152c4a(GFP_NOFS|__GFP_HIGHMEM|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_HARDWALL|__GFP_MOVABLE), pid 20039, tgid 19537 (NetworkService), ts 38403893384078, free_ts 38403858760495
[38418.310128]  post_alloc_hook+0x1d0/0x1e8
[38418.330509]  prep_new_page+0x30/0x150
[38418.358836]  get_page_from_freelist+0x11e8/0x127c
[38418.375352]  __alloc_pages_noprof+0x1b0/0x448
[38418.399171]  __folio_alloc_noprof+0x1c/0x64
[38418.430498]  page_cache_ra_unbounded+0x1a4/0x36c
[38418.440402]  page_cache_ra_order+0x358/0x434
[38418.446579]  page_cache_async_ra+0x128/0x17c
[38418.454399]  filemap_fault+0x14c/0x868
[38418.467818]  f2fs_filemap_fault+0x34/0xec
[38418.475253]  __do_fault+0x70/0x110
[38418.484117]  do_pte_missing+0x424/0x12f0
[38418.489691]  handle_mm_fault+0x4d4/0x818
[38418.499341]  do_page_fault+0x210/0x640
[38418.504888]  do_translation_fault+0x48/0x11c
[38418.510476]  do_mem_abort+0x5c/0x108
[38418.515795] page last free pid 64 tgid 64 stack trace:
[38418.527744]  free_unref_folios+0x944/0xe94
[38418.534456]  shrink_folio_list+0x8c8/0x1304
[38418.543434]  evict_folios+0x12ec/0x1818
[38418.550869]  try_to_shrink_lruvec+0x1fc/0x3c8
[38418.561221]  shrink_one+0xa4/0x230
[38418.574348]  shrink_node+0xbe0/0xfc4
[38418.599077]  balance_pgdat+0x7bc/0xce4
[38418.630024]  kswapd+0x298/0x4d8
[38418.650979]  kthread+0x118/0x1ac
[38418.670266]  ret_from_fork+0x10/0x20

_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
