On Wed, 14 Oct 2020 23:48:05 +0800 Qiujun Huang <[email protected]> wrote:
> On Wed, Oct 14, 2020 at 11:38 PM Steven Rostedt <[email protected]> wrote:
> >
> > On Wed, 14 Oct 2020 23:16:14 +0800
> > Qiujun Huang <[email protected]> wrote:
> >
> > > It may be better to check each page is aligned by 4 bytes. The 2
> > > least significant bits of the address will be used as flags.
> > >
> > > Signed-off-by: Qiujun Huang <[email protected]>
> > > ---
> > >  kernel/trace/ring_buffer.c | 11 +++++++----
> > >  1 file changed, 7 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> > > index 93ef0ab6ea20..9dec7d58b177 100644
> > > --- a/kernel/trace/ring_buffer.c
> > > +++ b/kernel/trace/ring_buffer.c
> > > @@ -1420,7 +1420,8 @@ static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
> > >  	return 0;
> > >  }
> > >
> > > -static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
> > > +static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
> > > +			       long nr_pages, struct list_head *pages, int cpu)
> > >  {
> > >  	struct buffer_page *bpage, *tmp;
> > >  	bool user_thread = current->mm != NULL;
> > > @@ -1464,6 +1465,8 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
> > >  		if (!bpage)
> > >  			goto free_pages;
> > >
> > > +		rb_check_bpage(cpu_buffer, bpage);
> > > +
> >
> > Why add it here, and not just add this check to the scan in
> > rb_check_pages()?
>
> rb_head_page_deactivate() in rb_check_pages() will clear the 2 LSB first.
>

Well, you could just add another scan there, but if you want to do it this
way, then remove passing the int cpu to these functions, and use
cpu_buffer->cpu, as keeping the cpu is just redundant.

Also, did you see an issue? This check is more of me being paranoid to
make sure we don't crash later. I've honestly never seen it trigger.

-- Steve

