On Tue, Sep 22, 2020 at 01:36:59PM -0700, Roman Gushchin wrote:
> @@ -422,7 +421,13 @@ static inline void clear_page_mem_cgroup(struct page *page)
>   */
>  static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
>  {
> -     return (struct obj_cgroup **)(page->memcg_data & ~0x1UL);
> +     unsigned long memcg_data = page->memcg_data;
> +
> +     VM_BUG_ON_PAGE(memcg_data && !test_bit(PG_MEMCG_OBJ_CGROUPS,
> +                                            &memcg_data), page);
> +     __clear_bit(PG_MEMCG_OBJ_CGROUPS, &memcg_data);
> +
> +     return (struct obj_cgroup **)memcg_data;

Slab allocations set up page->memcg_data locklessly, right? AFAICS,
the page_objcg lookup functions all need READ_ONCE() loads.
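
E.g. something along these lines for this one (just a sketch of the
READ_ONCE() variant, reusing the flag layout from your patch, not
tested):

	static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
	{
		/* Pairs with the lockless store during slab allocation */
		unsigned long memcg_data = READ_ONCE(page->memcg_data);

		VM_BUG_ON_PAGE(memcg_data && !test_bit(PG_MEMCG_OBJ_CGROUPS,
						       &memcg_data), page);
		__clear_bit(PG_MEMCG_OBJ_CGROUPS, &memcg_data);

		return (struct obj_cgroup **)memcg_data;
	}

and likewise a single READ_ONCE() snapshot in the other lookup helpers
that inspect page->memcg_data.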
