Hello,

On (04/26/16 17:08), Dan Streetman wrote:
[..]
> -static void __zswap_pool_release(struct rcu_head *head)
> +static void __zswap_pool_release(struct work_struct *work)
>  {
> -     struct zswap_pool *pool = container_of(head, typeof(*pool), rcu_head);
> +     struct zswap_pool *pool = container_of(work, typeof(*pool), work);
> +
> +     synchronize_rcu();
>  
>       /* nobody should have been able to get a kref... */
>       WARN_ON(kref_get_unless_zero(&pool->kref));
> @@ -674,7 +676,9 @@ static void __zswap_pool_empty(struct kref *kref)
>       WARN_ON(pool == zswap_pool_current());
>  
>       list_del_rcu(&pool->list);
> -     call_rcu(&pool->rcu_head, __zswap_pool_release);
> +
> +     INIT_WORK(&pool->work, __zswap_pool_release);
> +     schedule_work(&pool->work);

so in general the patch looks good to me.

it's either I didn't have enough coffee yet (which is true) or
_IN THEORY_ this creates a tiny race condition, which is hard (and
unlikely) to hit, but still. the problem is CONFIG_ZSMALLOC_STAT.

zsmalloc stats are exported via debugfs; the stats dir is created
during pool setup in zs_pool_stat_create() -> debugfs_create_dir(),
and is named zsmalloc<ID>.

so, once again in theory, since zswap reuses the same <ID>, two
different pools can end up with the same debugfs dir name, so a
series of zpool changes via the user space knob

        zsmalloc > zpool
        zbud > zpool
        zsmalloc > zpool

can result in

release zsmalloc0        switch to zbud          switch to zsmalloc
__zswap_pool_release()
        schedule_work()
                                 ...
                                                 zs_create_pool()
                                                         zs_pool_stat_create()
                                                         << zsmalloc0 still exists >>

        work finally runs
                zs_destroy_pool()
                        zs_pool_stat_destroy()

        -ss
