CC: [email protected]
In-Reply-To: <[email protected]>
References: <[email protected]>
TO: Steven Rostedt <[email protected]>
TO: [email protected]
CC: Ingo Molnar <[email protected]>
CC: Andrew Morton <[email protected]>
CC: Linux Memory Management List <[email protected]>

Hi Steven,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on rostedt-trace/for-next]
[also build test WARNING on linux/master hnaz-mm/master linus/master v5.16-rc3 next-20211129]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Steven-Rostedt/tracing-Various-updates/20211130-104342
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git for-next
:::::: branch date: 3 hours ago
:::::: commit date: 3 hours ago
config: x86_64-randconfig-s032-20211128 (https://download.01.org/0day-ci/archive/20211130/[email protected]/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.4-dirty
        # https://github.com/0day-ci/linux/commit/1ac91c8764ae50601cd41dceb620205607ab59f6
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Steven-Rostedt/tracing-Various-updates/20211130-104342
        git checkout 1ac91c8764ae50601cd41dceb620205607ab59f6
        # save the config file to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=x86_64 SHELL=/bin/bash kernel/trace/

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <[email protected]>


sparse warnings: (new ones prefixed by >>)
   kernel/trace/trace.c:5710:1: sparse: sparse: trying to concatenate 9583-character string (8191 bytes max)
   kernel/trace/trace.c:392:28: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct trace_export **list @@     got struct trace_export [noderef] __rcu ** @@
   kernel/trace/trace.c:392:28: sparse:     expected struct trace_export **list
   kernel/trace/trace.c:392:28: sparse:     got struct trace_export [noderef] __rcu **
   kernel/trace/trace.c:406:33: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct trace_export **list @@     got struct trace_export [noderef] __rcu ** @@
   kernel/trace/trace.c:406:33: sparse:     expected struct trace_export **list
   kernel/trace/trace.c:406:33: sparse:     got struct trace_export [noderef] __rcu **
>> kernel/trace/trace.c:2769:27: sparse: sparse: assignment expression in conditional
   kernel/trace/trace.c:2843:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct event_filter *filter @@     got struct event_filter [noderef] __rcu *filter @@
   kernel/trace/trace.c:2843:38: sparse:     expected struct event_filter *filter
   kernel/trace/trace.c:2843:38: sparse:     got struct event_filter [noderef] __rcu *filter
   kernel/trace/trace.c:3225:46: sparse: sparse: incorrect type in initializer (different address spaces) @@     expected void const [noderef] __percpu *__vpp_verify @@     got struct trace_buffer_struct * @@
   kernel/trace/trace.c:3225:46: sparse:     expected void const [noderef] __percpu *__vpp_verify
   kernel/trace/trace.c:3225:46: sparse:     got struct trace_buffer_struct *
   kernel/trace/trace.c:3241:9: sparse: sparse: incorrect type in initializer (different address spaces) @@     expected void const [noderef] __percpu *__vpp_verify @@     got int * @@
   kernel/trace/trace.c:3241:9: sparse:     expected void const [noderef] __percpu *__vpp_verify
   kernel/trace/trace.c:3241:9: sparse:     got int *
   kernel/trace/trace.c:3251:17: sparse: sparse: incorrect type in assignment (different address spaces) @@     expected struct trace_buffer_struct *buffers @@     got struct trace_buffer_struct [noderef] __percpu * @@
   kernel/trace/trace.c:3251:17: sparse:     expected struct trace_buffer_struct *buffers
   kernel/trace/trace.c:3251:17: sparse:     got struct trace_buffer_struct [noderef] __percpu *
   kernel/trace/trace.c:346:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   kernel/trace/trace.c:346:9: sparse:    struct trace_export [noderef] __rcu *
   kernel/trace/trace.c:346:9: sparse:    struct trace_export *
   kernel/trace/trace.c:361:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   kernel/trace/trace.c:361:9: sparse:    struct trace_export [noderef] __rcu *
   kernel/trace/trace.c:361:9: sparse:    struct trace_export *

vim +2769 kernel/trace/trace.c

2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2736) 
ccb469a198cffa Steven Rostedt            2012-08-02  2737  struct ring_buffer_event *
13292494379f92 Steven Rostedt (VMware    2019-12-13  2738) trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
7f1d2f8210195c Steven Rostedt (Red Hat   2015-05-05  2739)                        struct trace_event_file *trace_file,
ccb469a198cffa Steven Rostedt            2012-08-02  2740                         int type, unsigned long len,
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2741                         unsigned int trace_ctx)
ccb469a198cffa Steven Rostedt            2012-08-02  2742  {
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2743)      struct ring_buffer_event *entry;
b94bc80df64823 Steven Rostedt (VMware    2021-03-16  2744)      struct trace_array *tr = trace_file->tr;
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2745)      int val;
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2746) 
b94bc80df64823 Steven Rostedt (VMware    2021-03-16  2747)      *current_rb = tr->array_buffer.buffer;
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2748) 
b94bc80df64823 Steven Rostedt (VMware    2021-03-16  2749)      if (!tr->no_filter_buffering_ref &&
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2750)          (trace_file->flags & (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED))) {
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2751)              preempt_disable_notrace();
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2752)              /*
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2753)               * Filtering is on, so try to use the per cpu buffer first.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2754)               * This buffer will simulate a ring_buffer_event,
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2755)               * where the type_len is zero and the array[0] will
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2756)               * hold the full length.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2757)               * (see include/linux/ring-buffer.h for details on
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2758)               *  how the ring_buffer_event is structured).
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2759)               *
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2760)               * Using a temp buffer during filtering and copying it
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2761)               * on a matched filter is quicker than writing directly
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2762)               * into the ring buffer and then discarding it when
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2763)               * it doesn't match. That is because the discard
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2764)               * requires several atomic operations to get right.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2765)               * Copying on match and doing nothing on a failed match
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2766)               * is still quicker than no copy on match, but having
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2767)               * to discard out of the ring buffer on a failed match.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2768)               */
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29 @2769)              if (entry = __this_cpu_read(trace_buffered_event)) {
faa76a6c289f43 Steven Rostedt (VMware    2021-06-09  2770)                      int max_len = PAGE_SIZE - struct_size(entry, array, 1);
faa76a6c289f43 Steven Rostedt (VMware    2021-06-09  2771) 
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2772)                      val = this_cpu_inc_return(trace_buffered_event_cnt);
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2773) 
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2774)                      /*
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2775)                       * Preemption is disabled, but interrupts and NMIs
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2776)                       * can still come in now. If that happens after
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2777)                       * the above increment, then it will have to go
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2778)                       * back to the old method of allocating the event
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2779)                       * on the ring buffer, and if the filter fails, it
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2780)                       * will have to call ring_buffer_discard_commit()
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2781)                       * to remove it.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2782)                       *
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2783)                       * Need to also check the unlikely case that the
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2784)                       * length is bigger than the temp buffer size.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2785)                       * If that happens, then the reserve is pretty much
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2786)                       * guaranteed to fail, as the ring buffer currently
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2787)                       * only allows events less than a page. But that may
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2788)                       * change in the future, so let the ring buffer reserve
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2789)                       * handle the failure in that case.
8f0901cda14d3b Steven Rostedt (VMware    2021-06-09  2790)                       */
faa76a6c289f43 Steven Rostedt (VMware    2021-06-09  2791)                      if (val == 1 && likely(len <= max_len)) {
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2792                               trace_event_setup(entry, type, trace_ctx);
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2793)                              entry->array[0] = len;
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2794)                              /* Return with preemption disabled */
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2795)                              return entry;
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2796)                      }
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2797)                      this_cpu_dec(trace_buffered_event_cnt);
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2798)              }
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2799)              /* __trace_buffer_lock_reserve() disables preemption */
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2800)              preempt_enable_notrace();
1ac91c8764ae50 Steven Rostedt (VMware    2021-11-29  2801)      }
0fc1b09ff1ff40 Steven Rostedt (Red Hat   2016-05-03  2802) 
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2803       entry = __trace_buffer_lock_reserve(*current_rb, type, len,
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2804                                           trace_ctx);
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2805)      /*
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2806)       * If tracing is off, but we have triggers enabled
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2807)       * we still need to look at the event data. Use the temp_buffer
906695e5932463 Qiujun Huang              2020-10-31  2808        * to store the trace event for the trigger to use. It's recursive
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2809)       * safe and will not be recorded anywhere.
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2810)       */
5d6ad960a71f0b Steven Rostedt (Red Hat   2015-05-13  2811)      if (!entry && trace_file->flags & EVENT_FILE_FL_TRIGGER_COND) {
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2812)              *current_rb = temp_buffer;
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2813               entry = __trace_buffer_lock_reserve(*current_rb, type, len,
36590c50b2d072 Sebastian Andrzej Siewior 2021-01-25  2814                                                   trace_ctx);
ccb469a198cffa Steven Rostedt            2012-08-02  2815       }
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2816)      return entry;
2c4a33aba5f9ea Steven Rostedt (Red Hat   2014-03-25  2817) }
ccb469a198cffa Steven Rostedt            2012-08-02  2818  EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
ccb469a198cffa Steven Rostedt            2012-08-02  2819  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
_______________________________________________
kbuild mailing list -- [email protected]
To unsubscribe send an email to [email protected]
