On Sun, Oct 13, 2024 at 12:56 AM Shakeel Butt <shakeel.b...@linux.dev> wrote:
>
> On Thu, Oct 10, 2024 at 01:56:42PM GMT, Andrii Nakryiko wrote:
> > To increase mm->mm_lock_seq robustness, switch it from int to long, so
> > that it's a 64-bit counter on 64-bit systems and we can stop worrying
> > about it wrapping around in just ~4 billion iterations. Same goes for
> > VMA's matching vm_lock_seq, which is derived from mm_lock_seq.

vm_lock_seq does not need to be long, but for consistency I guess that
makes sense. While at it, can you please change these seq counters to
be unsigned?
Also, did you check with pahole whether the vm_area_struct layout change
pushes some members into a different cacheline or creates new gaps?

> >
> > I didn't use __u64 outright to keep 32-bit architectures unaffected, but
> > if it seems important enough, I have nothing against using __u64.
> >
> > Suggested-by: Jann Horn <ja...@google.com>
> > Signed-off-by: Andrii Nakryiko <and...@kernel.org>
>
> Reviewed-by: Shakeel Butt <shakeel.b...@linux.dev>
