On Sun, 2020-10-18 at 19:52 -0600, Yu Zhao wrote:
> On Wed, May 27, 2020 at 10:11:19PM +0200, Sebastian Andrzej Siewior wrote:
> > From: Mike Galbraith <umgwanakikb...@gmail.com>
> > 
> > The zcomp driver uses per-CPU compression. The per-CPU data pointer is
> > acquired with get_cpu_ptr() which implicitly disables preemption.
> > It allocates memory inside the preempt disabled region which conflicts
> > with the PREEMPT_RT semantics.
> > 
> > Replace the implicit preemption control with an explicit local lock.
> > This allows RT kernels to substitute it with a real per-CPU lock, which
> > serializes the access but keeps the code section preemptible. On non-RT
> > kernels this maps to preempt_disable() as before, i.e. no functional
> > change.
> 
> Hi,
> 
> This change seems to have introduced a potential deadlock. Can you
> please take a look?

Hm, looks like I'm getting undeserved credit for uncovering a locking
bug.  In reality, Sebastian was generous with attribution of derivative
work, so he should get the credit... and it looks like peterz has
already fixed it.

Date: Fri, 16 Oct 2020 14:40:09 +0200
From: Peter Zijlstra <pet...@infradead.org>

---

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9100ac36670a..c1e2c2e1cde8 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1216,10 +1216,11 @@ static void zram_free_page(struct zram *zram, size_t index)
 static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
                                struct bio *bio, bool partial_io)
 {
-       int ret;
+       struct zcomp_strm *zstrm;
        unsigned long handle;
        unsigned int size;
        void *src, *dst;
+       int ret;
 
        zram_slot_lock(zram, index);
        if (zram_test_flag(zram, index, ZRAM_WB)) {
@@ -1250,6 +1251,9 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
 
        size = zram_get_obj_size(zram, index);
 
+       if (size != PAGE_SIZE)
+               zstrm = zcomp_stream_get(zram->comp);
+
        src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
        if (size == PAGE_SIZE) {
                dst = kmap_atomic(page);
@@ -1257,8 +1261,6 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
                kunmap_atomic(dst);
                ret = 0;
        } else {
-               struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);
-
                dst = kmap_atomic(page);
                ret = zcomp_decompress(zstrm, src, size, dst);
                kunmap_atomic(dst);
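
The fix just pulls zcomp_stream_get() in front of zs_map_object(), so
the per-CPU stream and its local lock are taken before zsmalloc maps
the object instead of from within the mapped section.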

