On 2024/9/3 23:46, kernel test robot wrote:
Hi Hongbo,

kernel test robot noticed the following build errors:

[auto build test ERROR on next-20240902]

url:    https://github.com/intel-lab-lkp/linux/commits/Hongbo-Li/soc-fsl-qbman-make-use-of-the-helper-function-kthread_run_on_cpu/20240903-060257
base:   next-20240902
patch link:    https://lore.kernel.org/r/20240902133125.3089560-1-lihongbo22%40huawei.com
patch subject: [PATCH -next] soc/fsl/qbman: make use of the helper function kthread_run_on_cpu()
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20240903/202409032300.9u9g0c8j-...@intel.com/config)
compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240903/202409032300.9u9g0c8j-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <l...@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202409032300.9u9g0c8j-...@intel.com/

All errors (new ones prefixed by >>):

drivers/soc/fsl/qbman/qman_test_stash.c:112:27: error: too many arguments to function call, expected 4, have 5
      111 |                 struct task_struct *k = kthread_run_on_cpu(bstrap_fn, &bstrap,
          |                                         ~~~~~~~~~~~~~~~~~~
      112 |                                                 cpu, "hotpotato%d", cpu);
Sorry, I forgot to remove the last cpu argument. kthread_run_on_cpu() supplies the cpu id for the "%u" in namefmt itself, so the call should be:

struct task_struct *k = kthread_run_on_cpu(bstrap_fn, &bstrap, cpu, "hotpotato%u");


Thanks,
Hongbo

          |                                                                     ^~~
    include/linux/kthread.h:73:1: note: 'kthread_run_on_cpu' declared here
       73 | kthread_run_on_cpu(int (*threadfn)(void *data), void *data,
          | ^                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
       74 |                         unsigned int cpu, const char *namefmt)
          |                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    1 error generated.

Kconfig warnings: (for reference only)
    WARNING: unmet direct dependencies detected for OMAP2PLUS_MBOX
    Depends on [n]: MAILBOX [=y] && (ARCH_OMAP2PLUS || ARCH_K3)
    Selected by [y]:
    - TI_K3_M4_REMOTEPROC [=y] && REMOTEPROC [=y] && (ARCH_K3 || COMPILE_TEST [=y])


vim +112 drivers/soc/fsl/qbman/qman_test_stash.c

     35 
     36 /*
     37  * Algorithm:
     38  *
     39  * Each cpu will have HP_PER_CPU "handlers" set up, each of which incorporates
     40  * an rx/tx pair of FQ objects (both of which are stashed on dequeue). The
     41  * organisation of FQIDs is such that the HP_PER_CPU*NUM_CPUS handlers will
     42  * shuttle a "hot potato" frame around them such that every forwarding action
     43  * moves it from one cpu to another. (The use of more than one handler per cpu
     44  * is to allow enough handlers/FQs to truly test the significance of caching -
     45  * ie. when cache-expiries are occurring.)
     46  *
     47  * The "hot potato" frame content will be HP_NUM_WORDS*4 bytes in size, and the
     48  * first and last words of the frame data will undergo a transformation step on
     49  * each forwarding action. To achieve this, each handler will be assigned a
     50  * 32-bit "mixer", that is produced using a 32-bit LFSR. When a frame is
     51  * received by a handler, the mixer of the expected sender is XOR'd into all
     52  * words of the entire frame, which is then validated against the original
     53  * values. Then, before forwarding, the entire frame is XOR'd with the mixer of
     54  * the current handler. Apart from validating that the frame is taking the
     55  * expected path, this also provides some quasi-realistic overheads to each
     56  * forwarding action - dereferencing *all* the frame data, computation, and
     57  * conditional branching. There is a "special" handler designated to act as the
     58  * instigator of the test by creating an enqueuing the "hot potato" frame, and
     59  * to determine when the test has completed by counting HP_LOOPS iterations.
     60  *
     61  * Init phases:
     62  *
     63  * 1. prepare each cpu's 'hp_cpu' struct using on_each_cpu(,,1) and link them
     64  *    into 'hp_cpu_list'. Specifically, set processor_id, allocate HP_PER_CPU
     65  *    handlers and link-list them (but do no other handler setup).
     66  *
     67  * 2. scan over 'hp_cpu_list' HP_PER_CPU times, the first time sets each
     68  *    hp_cpu's 'iterator' to point to its first handler. With each loop,
     69  *    allocate rx/tx FQIDs and mixer values to the hp_cpu's iterator handler
     70  *    and advance the iterator for the next loop. This includes a final fixup,
     71  *    which connects the last handler to the first (and which is why phase 2
     72  *    and 3 are separate).
     73  *
     74  * 3. scan over 'hp_cpu_list' HP_PER_CPU times, the first time sets each
     75  *    hp_cpu's 'iterator' to point to its first handler. With each loop,
     76  *    initialise FQ objects and advance the iterator for the next loop.
     77  *    Moreover, do this initialisation on the cpu it applies to so that Rx FQ
     78  *    initialisation targets the correct cpu.
     79  */
     80 
     81 /*
     82  * helper to run something on all cpus (can't use on_each_cpu(), as that invokes
     83  * the fn from irq context, which is too restrictive).
     84  */
     85 struct bstrap {
     86         int (*fn)(void);
     87         atomic_t started;
     88 };
     89 static int bstrap_fn(void *bs)
     90 {
     91         struct bstrap *bstrap = bs;
     92         int err;
     93 
     94         atomic_inc(&bstrap->started);
     95         err = bstrap->fn();
     96         if (err)
     97                 return err;
     98         while (!kthread_should_stop())
     99                 msleep(20);
    100         return 0;
    101 }
    102 static int on_all_cpus(int (*fn)(void))
    103 {
    104         int cpu;
    105 
    106         for_each_cpu(cpu, cpu_online_mask) {
    107                 struct bstrap bstrap = {
    108                         .fn = fn,
    109                         .started = ATOMIC_INIT(0)
    110                 };
    111                 struct task_struct *k = kthread_run_on_cpu(bstrap_fn, &bstrap,
  > 112                                                      cpu, "hotpotato%d", cpu);
    113                 int ret;
    114 
    115                 if (IS_ERR(k))
    116                         return -ENOMEM;
    117                 /*
    118                  * If we call kthread_stop() before the "wake up" has had an
    119                  * effect, then the thread may exit with -EINTR without ever
    120                  * running the function. So poll until it's started before
    121                  * requesting it to stop.
    122                  */
    123                 while (!atomic_read(&bstrap.started))
    124                         msleep(20);
    125                 ret = kthread_stop(k);
    126                 if (ret)
    127                         return ret;
    128         }
    129         return 0;
    130 }
    131 

