On 11/12/20 3:03 AM, Hugh Dickins wrote:
On Wed, 11 Nov 2020, Vlastimil Babka wrote:
> On 11/5/20 9:55 AM, Alex Shi wrote:
>
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1542,7 +1542,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
> >  */
> > int __isolate_lru_page(struct page *page, isolate_mode_t
On 11/5/20 9:55 AM, Alex Shi wrote:
Currently lru_lock still guards both the lru list and the page's lru bit,
which is fine. But if we want to use a specific lruvec lock for the page,
we need to pin down the page's lruvec/memcg during locking. Just taking
the lruvec lock first may be undermined by the page's memcg
charge/migration. To fix this