On 8/29/2025 1:39 PM, Dan Carpenter wrote:
Hello Dongsheng Yang,

Commit 1d57628ff95b ("dm-pcache: add persistent cache target in
device-mapper") from Aug 12, 2025 (linux-next), leads to the
following Smatch static checker warning:

        drivers/md/dm-pcache/cache_segment.c:263 cache_seg_gen_increase()
        warn: sleeping in atomic context


Hi Dan,

    Thanks for your report, I will send out the patch after my testing is finished.

BTW, can you share the Smatch checking command or script? I want to add
a Smatch check to my testing suite if possible; I did not find this kind
of problem with a simple:

make CHECK="smatch -p=kernel" C=1 M=drivers/md/dm-pcache


Thanks a lot


drivers/md/dm-pcache/cache_segment.c
     257 static void cache_seg_gen_increase(struct pcache_cache_segment *cache_seg)
     258 {
     259         spin_lock(&cache_seg->gen_lock);
     260         cache_seg->gen++;
     261         spin_unlock(&cache_seg->gen_lock);
     262
--> 263         cache_seg_ctrl_write(cache_seg);

cache_seg_ctrl_write() takes a mutex so it can't be called with
preemption disabled.

     264 }
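
To make the pattern concrete, here is a minimal, self-contained sketch of
what the checker is complaining about (the names are made up for the
illustration, not taken from dm-pcache): a mutex_lock(), which may sleep,
is reached while a caller still has preemption disabled.

#include <linux/mutex.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(ctrl_mutex);
static DEFINE_SPINLOCK(gen_lock);
static unsigned int gen;

static void ctrl_write(void)
{
        mutex_lock(&ctrl_mutex);        /* may sleep */
        /* ... persist the new generation ... */
        mutex_unlock(&ctrl_mutex);
}

static void gen_increase(void)
{
        spin_lock(&gen_lock);
        gen++;
        spin_unlock(&gen_lock);

        ctrl_write();                   /* fine from process context ... */
}

static void caller_in_atomic_context(void)
{
        preempt_disable();
        gen_increase();                 /* ... but a bug with preemption off */
        preempt_enable();
}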

The call tree is kind of messy, but the point is that there are four
callers which Smatch says disable preemption:

miss_read_end_req() <- disables preempt
-> cache_data_alloc() <- disables preempt
miss_read_end_req() <- disables preempt <duplicate>
cache_write() <- disables preempt
    -> cache_seg_put()
       -> cache_seg_invalidate()
          -> cache_seg_gen_increase()

drivers/md/dm-pcache/cache_req.c
     36  static int cache_data_alloc(struct pcache_cache *cache, struct pcache_cache_key *key)
     37  {
     38          struct pcache_cache_data_head *data_head;
     39          struct pcache_cache_pos *head_pos;
     40          struct pcache_cache_segment *cache_seg;
     41          u32 seg_remain;
     42          u32 allocated = 0, to_alloc;
     43          int ret = 0;
     44
     45          preempt_disable();
                 ^^^^^^^^^^^^^^^^^^

     46          data_head = get_data_head(cache);
     47  again:
     48          to_alloc = key->len - allocated;
     49          if (!data_head->head_pos.cache_seg) {
     50                  seg_remain = 0;
     51          } else {
     52                  cache_pos_copy(&key->cache_pos, &data_head->head_pos);
     53                  key->seg_gen = key->cache_pos.cache_seg->gen;
     54
     55                  head_pos = &data_head->head_pos;
     56                  cache_seg = head_pos->cache_seg;
     57                  seg_remain = cache_seg_remain(head_pos);
     58          }
     59
     60          if (seg_remain > to_alloc) {
     61                  /* If remaining space in segment is sufficient for the cache key, allocate it. */
     62                  cache_pos_advance(head_pos, to_alloc);
     63                  allocated += to_alloc;
     64                  cache_seg_get(cache_seg);
     65          } else if (seg_remain) {
     66                  /* If remaining space is not enough, allocate the remaining space and adjust the cache key length. */
     67                  cache_pos_advance(head_pos, seg_remain);
     68                  key->len = seg_remain;
     69
     70                  /* Get for key: obtain a reference to the cache segment for the key. */
     71                  cache_seg_get(cache_seg);
     72                  /* Put for head_pos->cache_seg: release the reference for the current head's segment. */
     73                  cache_seg_put(head_pos->cache_seg);
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If we are still holding a second reference, then this warning is a false
positive, but the comment suggests that we might actually free it.

     74                  head_pos->cache_seg = NULL;
     75          } else {
     76                  /* Initialize a new data head if no segment is available. */
     77                  ret = cache_data_head_init(cache);
     78                  if (ret)
     79                          goto out;
     80
     81                  goto again;
     82          }
     83
     84  out:
     85          preempt_enable();
     86
     87          return ret;
     88  }
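
To make that refcounting question concrete, here is a rough kref-based
sketch (again with made-up names, not the real pcache refcounting): whether
the warning is a false positive depends on whether this put can drop the
final reference, because the release path is what ends up taking the mutex.

#include <linux/container_of.h>
#include <linux/kref.h>
#include <linux/mutex.h>
#include <linux/preempt.h>
#include <linux/slab.h>

struct seg {
        struct kref ref;
        struct mutex ctrl_mutex;
};

static void seg_release(struct kref *ref)
{
        struct seg *seg = container_of(ref, struct seg, ref);

        mutex_lock(&seg->ctrl_mutex);   /* may sleep */
        /* ... invalidate the segment and persist its generation ... */
        mutex_unlock(&seg->ctrl_mutex);
        kfree(seg);
}

static void alloc_path(struct seg *seg)
{
        preempt_disable();
        /*
         * Only safe if another reference is guaranteed to still be
         * held; otherwise seg_release() runs in atomic context.
         */
        kref_put(&seg->ref, seg_release);
        preempt_enable();
}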

regards,
dan carpenter
