On 4/18/19 11:41 AM, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> Signed-off-by: Thomas Gleixner <t...@linutronix.de>
> Acked-by: Dmitry Vyukov <dvyu...@google.com>
> Cc: Andrey Ryabinin <aryabi...@virtuozzo.com>
> Cc: Alexander Potapenko <gli...@google.com>
> Cc: kasan-...@googlegroups.com
> Cc: linux...@kvack.org

Acked-by: Andrey Ryabinin <aryabi...@virtuozzo.com>

>  
>  static inline depot_stack_handle_t save_stack(gfp_t flags)
>  {
>       unsigned long entries[KASAN_STACK_DEPTH];
> -     struct stack_trace trace = {
> -             .nr_entries = 0,
> -             .entries = entries,
> -             .max_entries = KASAN_STACK_DEPTH,
> -             .skip = 0
> -     };
> +     unsigned int nr_entries;
>  
> -     save_stack_trace(&trace);
> -     filter_irq_stacks(&trace);
> -
> -     return depot_save_stack(&trace, flags);
> +     nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> +     nr_entries = filter_irq_stacks(entries, nr_entries);
> +     return stack_depot_save(entries, nr_entries, flags);

Suggestion for further improvement:

stack_trace_save() shouldn't unwind beyond the irq entry point, so we
wouldn't need filter_irq_stacks().
Most likely no call site cares about the random task stack above the irq
entry point, so it doesn't make sense to spend resources unwinding the
non-irq stack from an interrupt first and filtering it out later.

That would improve the performance of stack_trace_save() when called from
an interrupt, and it would fix page_owner, which feeds an unfiltered stack
to stack_depot_save(). The random non-irq part defeats the deduplication
benefit of stack_depot_save().
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
