Re: [PATCH v4 2/2] arm64: enable context tracking

2014-05-23 Thread Kevin Hilman
Mark Rutland  writes:

> On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
>> On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
>> > Christopher Covington  writes:
>> > > On 05/22/2014 03:27 PM, Larry Bassel wrote:
>> > >> Make calls to ct_user_enter when the kernel is exited
>> > >> and ct_user_exit when the kernel is entered (in el0_da,
>> > >> el0_ia, el0_svc, el0_irq and all of the "error" paths).
>> > >> 
>> > >> These macros expand to function calls which will only work
>> > >> properly if el0_sync and related code has been rearranged
>> > >> (in a previous patch of this series).
>> > >> 
>> > >> The calls to ct_user_exit are made after hw debugging has been
>> > >> enabled (enable_dbg_and_irq).
>> > >> 
>> > >> The call to ct_user_enter is made at the beginning of the
>> > >> kernel_exit macro.
>> > >> 
>> > >> This patch is based on earlier work by Kevin Hilman.
>> > >> Save/restore optimizations were also done by Kevin.
>> > >
>> > >> --- a/arch/arm64/kernel/entry.S
>> > >> +++ b/arch/arm64/kernel/entry.S
>> > >> @@ -30,6 +30,44 @@
>> > >>  #include <asm/unistd32.h>
>> > >>  
>> > >>  /*
>> > >> + * Context tracking subsystem.  Used to instrument transitions
>> > >> + * between user and kernel mode.
>> > >> + */
>> > >> +   .macro ct_user_exit, restore = 0
>> > >> +#ifdef CONFIG_CONTEXT_TRACKING
>> > >> +   bl  context_tracking_user_exit
>> > >> +   .if \restore == 1
>> > >> +   /*
>> > >> +* Save/restore needed during syscalls.  Restore syscall 
>> > >> arguments from
>> > >> +* the values already saved on stack during kernel_entry.
>> > >> +*/
>> > >> +   ldp x0, x1, [sp]
>> > >> +   ldp x2, x3, [sp, #S_X2]
>> > >> +   ldp x4, x5, [sp, #S_X4]
>> > >> +   ldp x6, x7, [sp, #S_X6]
>> > >> +   .endif
>> > >> +#endif
>> > >> +   .endm
>> > >> +
>> > >> +   .macro ct_user_enter, save = 0
>> > >> +#ifdef CONFIG_CONTEXT_TRACKING
>> > >> +   .if \save == 1
>> > >> +   /*
>> > >> +* Save/restore only needed on syscall fastpath, which uses
>> > >> +* x0-x2.
>> > >> +*/
>> > >> +   push x2, x3
>> > >
>> > > Why is x3 saved?
>> > 
>> > I'll respond here since I worked with Larry on the context save/restore
>> > part.
>> > 
>> > [insert rather embarrassing disclaimer of ignorance of arm64 assembly]
>> > 
>> > Based on my reading of the code, I figured only x0-x2 needed to be
>> > saved.  However, based on some experiments with intentionally clobbering
>> > the registers[1] (as suggested by Mark Rutland) in order to make sure
>> > we're saving/restoring the right things, I discovered x3 was needed too
>> > (I missed updating the comment to mention x0-x3.)
>> > 
>> > Maybe Will/Catalin/Mark R. can shed some light here?
>> 
>> I haven't checked all the code paths but at least for pushing onto the
>> stack we must keep it 16-byte aligned (architecture requirement).
>
> Sure -- if modifying the stack we need to push/pop pairs of registers to
> keep it aligned. It might be better to use xzr as the dummy value in
> that case to make it clear that the value doesn't really matter.
>
> That said, ct_user_enter is only called in kernel_exit before we restore
> the values off the stack, and the only register I can spot that we need
> to preserve is x0 for the syscall return value. I can't see x1 or x2
> being used any more specially than the rest of the remaining registers.
> Am I missing something,

I don't think you're missing something.  I had thought my experiment in
clobbering registers uncovered that x1-x3 were also in use somewhere,
but in trying to reproduce that now, it's clear only x0 is important.

> or would it be sufficient to do the following?
> push  x0, xzr
> bl    context_tracking_user_enter
> pop   x0, xzr

Yes, this seems to work.

Following Will's suggestion of using a callee-saved register to save x0,
the updated version now looks like this:

.macro ct_user_enter, save = 0
#ifdef CONFIG_CONTEXT_TRACKING
.if \save == 1
/*
 * We only have to save/restore x0 on the fast syscall path where
 * x0 contains the syscall return.
 */
mov x19, x0
.endif
bl  context_tracking_user_enter
.if \save == 1
mov x0, x19
.endif
#endif
.endm
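
For reference, the reason x19 works as the scratch register: under the
AArch64 procedure call standard x19-x28 are callee-saved, so
context_tracking_user_enter must preserve it, and kernel_exit goes on to
reload the task's own x19 from the saved pt_regs before returning to user
space, so the temporary clobber never leaks out.  Below is a minimal
user-space sketch of the same idiom -- purely illustrative, not part of
the patch, and unlike the kernel macro an ordinary function has to save
and restore its caller's x19 itself (on a 16-byte aligned stack):

/*
 * stash_demo.S -- hypothetical illustration of keeping a value live
 * across a call in a callee-saved register (x19).
 *
 * Build: aarch64-linux-gnu-gcc stash_demo.S -o stash_demo
 * Run:   ./stash_demo; echo $?      (prints 42)
 */
        .text
        .globl  main
main:
        stp     x29, x30, [sp, #-32]!   // frame record; sp stays 16-byte aligned
        str     x19, [sp, #16]          // preserve the caller's x19
        mov     x29, sp

        mov     x0, #42                 // value we want to keep across the call
        mov     x19, x0                 // stash it in a callee-saved register
        bl      clobber                 // callee may trash x0-x18 freely
        mov     x0, x19                 // recovered; becomes main's exit status

        ldr     x19, [sp, #16]          // restore the caller's x19
        ldp     x29, x30, [sp], #32
        ret

clobber:                                // stand-in for context_tracking_user_enter
        mov     x0, #0                  // overwrite a few caller-saved registers
        mov     x1, #1
        mov     x17, #17
        ret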


We'll update this as well as address the comments on PATCH 1/2 and send
a v5.

Thanks guys for the review and guidance as I'm wandering a bit in the
dark here in arm64 assembler land.

Cheers,

Kevin


Re: [PATCH v4 2/2] arm64: enable context tracking

2014-05-23 Thread Will Deacon
On Fri, May 23, 2014 at 04:55:44PM +0100, Mark Rutland wrote:
> On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
> > On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
> > I haven't checked all the code paths but at least for pushing onto the
> > stack we must keep it 16-bytes aligned (architecture requirement).
> 
> Sure -- if modifying the stack we need to push/pop pairs of registers to
> keep it aligned. It might be better to use xzr as the dummy value in
> that case to make it clear that the value doesn't really matter.
> 
> That said, ct_user_enter is only called in kernel_exit before we restore
> the values off the stack, and the only register I can spot that we need
> to preserve is x0 for the syscall return value. I can't see x1 or x2
> being used any more specially than the rest of the remaining registers.
> Am I missing something, or would it be sufficient to do the following?
> 
> push  x0, xzr
> bl    context_tracking_user_enter
> pop   x0, xzr

... and if that works, then why are we using the stack instead of a
callee-saved register?

Will


Re: [PATCH v4 2/2] arm64: enable context tracking

2014-05-23 Thread Mark Rutland
On Fri, May 23, 2014 at 03:51:07PM +0100, Catalin Marinas wrote:
> On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
> > Christopher Covington  writes:
> > > On 05/22/2014 03:27 PM, Larry Bassel wrote:
> > >> Make calls to ct_user_enter when the kernel is exited
> > >> and ct_user_exit when the kernel is entered (in el0_da,
> > >> el0_ia, el0_svc, el0_irq and all of the "error" paths).
> > >> 
> > >> These macros expand to function calls which will only work
> > >> properly if el0_sync and related code has been rearranged
> > >> (in a previous patch of this series).
> > >> 
> > >> The calls to ct_user_exit are made after hw debugging has been
> > >> enabled (enable_dbg_and_irq).
> > >> 
> > >> The call to ct_user_enter is made at the beginning of the
> > >> kernel_exit macro.
> > >> 
> > >> This patch is based on earlier work by Kevin Hilman.
> > >> Save/restore optimizations were also done by Kevin.
> > >
> > >> --- a/arch/arm64/kernel/entry.S
> > >> +++ b/arch/arm64/kernel/entry.S
> > >> @@ -30,6 +30,44 @@
> > >>  #include <asm/unistd32.h>
> > >>  
> > >>  /*
> > >> + * Context tracking subsystem.  Used to instrument transitions
> > >> + * between user and kernel mode.
> > >> + */
> > >> +.macro ct_user_exit, restore = 0
> > >> +#ifdef CONFIG_CONTEXT_TRACKING
> > >> +bl  context_tracking_user_exit
> > >> +.if \restore == 1
> > >> +/*
> > >> + * Save/restore needed during syscalls.  Restore syscall 
> > >> arguments from
> > >> + * the values already saved on stack during kernel_entry.
> > >> + */
> > >> +ldp x0, x1, [sp]
> > >> +ldp x2, x3, [sp, #S_X2]
> > >> +ldp x4, x5, [sp, #S_X4]
> > >> +ldp x6, x7, [sp, #S_X6]
> > >> +.endif
> > >> +#endif
> > >> +.endm
> > >> +
> > >> +.macro ct_user_enter, save = 0
> > >> +#ifdef CONFIG_CONTEXT_TRACKING
> > >> +.if \save == 1
> > >> +/*
> > >> + * Save/restore only needed on syscall fastpath, which uses
> > >> + * x0-x2.
> > >> + */
> > >> +push x2, x3
> > >
> > > Why is x3 saved?
> > 
> > I'll respond here since I worked with Larry on the context save/restore
> > part.
> > 
> > [insert rather embarrassing disclaimer of ignorance of arm64 assembly]
> > 
> > Based on my reading of the code, I figured only x0-x2 needed to be
> > saved.  However, based on some experiments with intentionally clobbering
> > the registers[1] (as suggested by Mark Rutland) in order to make sure
> > we're saving/restoring the right things, I discovered x3 was needed too
> > (I missed updating the comment to mention x0-x3.)
> > 
> > Maybe Will/Catalin/Mark R. can shed some light here?
> 
> I haven't checked all the code paths but at least for pushing onto the
> > stack we must keep it 16-byte aligned (architecture requirement).

Sure -- if modifying the stack we need to push/pop pairs of registers to
keep it aligned. It might be better to use xzr as the dummy value in
that case to make it clear that the value doesn't really matter.

That said, ct_user_enter is only called in kernel_exit before we restore
the values off the stack, and the only register I can spot that we need
to preserve is x0 for the syscall return value. I can't see x1 or x2
being used any more specially than the rest of the remaining registers.
Am I missing something, or would it be sufficient to do the following?

push x0, xzr
bl  context_tracking_user_enter
pop x0, xzr
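
For readers less familiar with the entry.S helpers, a rough sketch of what
that sequence expands to (assuming push/pop are the usual stp/ldp wrapper
macros from asm/assembler.h), which is why registers go onto the stack in
pairs and why xzr is a convenient filler for the slot whose value we do
not care about:

        stp     x0, xzr, [sp, #-16]!    // "push x0, xzr": one 16-byte slot,
                                        // so sp stays 16-byte aligned
        bl      context_tracking_user_enter
        ldp     x0, xzr, [sp], #16      // "pop x0, xzr": the load into xzr
                                        // is simply discarded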

Cheers,
Mark.


Re: [PATCH v4 2/2] arm64: enable context tracking

2014-05-23 Thread Catalin Marinas
On Fri, May 23, 2014 at 01:11:38AM +0100, Kevin Hilman wrote:
> Christopher Covington  writes:
> > On 05/22/2014 03:27 PM, Larry Bassel wrote:
> >> Make calls to ct_user_enter when the kernel is exited
> >> and ct_user_exit when the kernel is entered (in el0_da,
> >> el0_ia, el0_svc, el0_irq and all of the "error" paths).
> >> 
> >> These macros expand to function calls which will only work
> >> properly if el0_sync and related code has been rearranged
> >> (in a previous patch of this series).
> >> 
> >> The calls to ct_user_exit are made after hw debugging has been
> >> enabled (enable_dbg_and_irq).
> >> 
> >> The call to ct_user_enter is made at the beginning of the
> >> kernel_exit macro.
> >> 
> >> This patch is based on earlier work by Kevin Hilman.
> >> Save/restore optimizations were also done by Kevin.
> >
> >> --- a/arch/arm64/kernel/entry.S
> >> +++ b/arch/arm64/kernel/entry.S
> >> @@ -30,6 +30,44 @@
> >>  #include <asm/unistd32.h>
> >>  
> >>  /*
> >> + * Context tracking subsystem.  Used to instrument transitions
> >> + * between user and kernel mode.
> >> + */
> >> +  .macro ct_user_exit, restore = 0
> >> +#ifdef CONFIG_CONTEXT_TRACKING
> >> +  bl  context_tracking_user_exit
> >> +  .if \restore == 1
> >> +  /*
> >> +   * Save/restore needed during syscalls.  Restore syscall arguments from
> >> +   * the values already saved on stack during kernel_entry.
> >> +   */
> >> +  ldp x0, x1, [sp]
> >> +  ldp x2, x3, [sp, #S_X2]
> >> +  ldp x4, x5, [sp, #S_X4]
> >> +  ldp x6, x7, [sp, #S_X6]
> >> +  .endif
> >> +#endif
> >> +  .endm
> >> +
> >> +  .macro ct_user_enter, save = 0
> >> +#ifdef CONFIG_CONTEXT_TRACKING
> >> +  .if \save == 1
> >> +  /*
> >> +   * Save/restore only needed on syscall fastpath, which uses
> >> +   * x0-x2.
> >> +   */
> >> +  push x2, x3
> >
> > Why is x3 saved?
> 
> I'll respond here since I worked with Larry on the context save/restore
> part.
> 
> [insert rather embarrassing disclaimer of ignorance of arm64 assembly]
> 
> Based on my reading of the code, I figured only x0-x2 needed to be
> saved.  However, based on some experiments with intentionally clobbering
> the registers[1] (as suggested by Mark Rutland) in order to make sure
> we're saving/restoring the right things, I discovered x3 was needed too
> (I missed updating the comment to mention x0-x3.)
> 
> Maybe Will/Catalin/Mark R. can shed some light here?

I haven't checked all the code paths but at least for pushing onto the
stack we must keep it 16-byte aligned (architecture requirement).

-- 
Catalin


Re: [PATCH v4 2/2] arm64: enable context tracking

2014-05-22 Thread Kevin Hilman
+Mark Rutland

Christopher Covington  writes:

> Hi Larry,
>
> On 05/22/2014 03:27 PM, Larry Bassel wrote:
>> Make calls to ct_user_enter when the kernel is exited
>> and ct_user_exit when the kernel is entered (in el0_da,
>> el0_ia, el0_svc, el0_irq and all of the "error" paths).
>> 
>> These macros expand to function calls which will only work
>> properly if el0_sync and related code has been rearranged
>> (in a previous patch of this series).
>> 
>> The calls to ct_user_exit are made after hw debugging has been
>> enabled (enable_dbg_and_irq).
>> 
>> The call to ct_user_enter is made at the beginning of the
>> kernel_exit macro.
>> 
>> This patch is based on earlier work by Kevin Hilman.
>> Save/restore optimizations were also done by Kevin.
>
>> --- a/arch/arm64/kernel/entry.S
>> +++ b/arch/arm64/kernel/entry.S
>> @@ -30,6 +30,44 @@
>>  #include <asm/unistd32.h>
>>  
>>  /*
>> + * Context tracking subsystem.  Used to instrument transitions
>> + * between user and kernel mode.
>> + */
>> +.macro ct_user_exit, restore = 0
>> +#ifdef CONFIG_CONTEXT_TRACKING
>> +bl  context_tracking_user_exit
>> +.if \restore == 1
>> +/*
>> + * Save/restore needed during syscalls.  Restore syscall arguments from
>> + * the values already saved on stack during kernel_entry.
>> + */
>> +ldp x0, x1, [sp]
>> +ldp x2, x3, [sp, #S_X2]
>> +ldp x4, x5, [sp, #S_X4]
>> +ldp x6, x7, [sp, #S_X6]
>> +.endif
>> +#endif
>> +.endm
>> +
>> +.macro ct_user_enter, save = 0
>> +#ifdef CONFIG_CONTEXT_TRACKING
>> +.if \save == 1
>> +/*
>> + * Save/restore only needed on syscall fastpath, which uses
>> + * x0-x2.
>> + */
>> +push x2, x3
>
> Why is x3 saved?

I'll respond here since I worked with Larry on the context save/restore
part.

[insert rather embarrassing disclaimer of ignorance of arm64 assembly]

Based on my reading of the code, I figured only x0-x2 needed to be
saved.  However, based on some experiments with intentionally clobbering
the registers[1] (as suggested by Mark Rutland) in order to make sure
we're saving/restoring the right things, I discovered x3 was needed too
(I missed updating the comment to mention x0-x3.)

Maybe Will/Catalin/Mark R. can shed some light here?

Kevin

[1]
From 8a8702b4d597d08def1368beae5db2f4a8aa Mon Sep 17 00:00:00 2001
From: Kevin Hilman <khil...@linaro.org>
Date: Fri, 9 May 2014 13:37:43 -0700
Subject: [PATCH] KJH: test: clobber regs

---
 arch/arm64/kernel/entry.S | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 520da4c02ece..232f0200e88d 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -36,6 +36,25 @@
.macro ct_user_exit, restore = 0
 #ifdef CONFIG_CONTEXT_TRACKING
bl  context_tracking_user_exit
+   movz x0, #0xff, lsl #48
+   movz x1, #0xff, lsl #48
+   movz x2, #0xff, lsl #48
+   movz x3, #0xff, lsl #48
+   movz x4, #0xff, lsl #48
+   movz x5, #0xff, lsl #48
+   movz x6, #0xff, lsl #48
+   movz x7, #0xff, lsl #48
+   movz x8, #0xff, lsl #48
+   movz x9, #0xff, lsl #48
+   movz x10, #0xff, lsl #48
+   movz x11, #0xff, lsl #48
+   movz x12, #0xff, lsl #48
+   movz x13, #0xff, lsl #48
+   movz x14, #0xff, lsl #48
+   movz x15, #0xff, lsl #48
+   movz x16, #0xff, lsl #48
+   movz x17, #0xff, lsl #48
+   movz x18, #0xff, lsl #48
.if \restore == 1
/*
 * Save/restore needed during syscalls.  Restore syscall arguments from
@@ -60,6 +79,25 @@
push x0, x1
.endif
bl  context_tracking_user_enter
+   movz x0, #0xff, lsl #48
+   movz x1, #0xff, lsl #48
+   movz x2, #0xff, lsl #48
+   movz x3, #0xff, lsl #48
+   movz x4, #0xff, lsl #48
+   movz x5, #0xff, lsl #48
+   movz x6, #0xff, lsl #48
+   movz x7, #0xff, lsl #48
+   movz x8, #0xff, lsl #48
+   movz x9, #0xff, lsl #48
+   movz x10, #0xff, lsl #48
+   movz x11, #0xff, lsl #48
+   movz x12, #0xff, lsl #48
+   movz x13, #0xff, lsl #48
+   movz x14, #0xff, lsl #48
+   movz x15, #0xff, lsl #48
+   movz x16, #0xff, lsl #48
+   movz x17, #0xff, lsl #48
+   movz x18, #0xff, lsl #48
.if \save == 1
pop x0, x1
pop x2, x3
-- 
1.9.2



Re: [PATCH v4 2/2] arm64: enable context tracking

2014-05-22 Thread Christopher Covington
Hi Larry,

On 05/22/2014 03:27 PM, Larry Bassel wrote:
> Make calls to ct_user_enter when the kernel is exited
> and ct_user_exit when the kernel is entered (in el0_da,
> el0_ia, el0_svc, el0_irq and all of the "error" paths).
> 
> These macros expand to function calls which will only work
> properly if el0_sync and related code has been rearranged
> (in a previous patch of this series).
> 
> The calls to ct_user_exit are made after hw debugging has been
> enabled (enable_dbg_and_irq).
> 
> The call to ct_user_enter is made at the beginning of the
> kernel_exit macro.
> 
> This patch is based on earlier work by Kevin Hilman.
> Save/restore optimizations were also done by Kevin.

> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -30,6 +30,44 @@
>  #include <asm/unistd32.h>
>  
>  /*
> + * Context tracking subsystem.  Used to instrument transitions
> + * between user and kernel mode.
> + */
> + .macro ct_user_exit, restore = 0
> +#ifdef CONFIG_CONTEXT_TRACKING
> + bl  context_tracking_user_exit
> + .if \restore == 1
> + /*
> +  * Save/restore needed during syscalls.  Restore syscall arguments from
> +  * the values already saved on stack during kernel_entry.
> +  */
> + ldp x0, x1, [sp]
> + ldp x2, x3, [sp, #S_X2]
> + ldp x4, x5, [sp, #S_X4]
> + ldp x6, x7, [sp, #S_X6]
> + .endif
> +#endif
> + .endm
> +
> + .macro ct_user_enter, save = 0
> +#ifdef CONFIG_CONTEXT_TRACKING
> + .if \save == 1
> + /*
> +  * Save/restore only needed on syscall fastpath, which uses
> +  * x0-x2.
> +  */
> + push x2, x3

Why is x3 saved?

> + push x0, x1
> + .endif
> + bl  context_tracking_user_enter
> + .if \save == 1
> + pop x0, x1
> + pop x2, x3
> + .endif
> +#endif
> + .endm

Thanks,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.


[PATCH v4 2/2] arm64: enable context tracking

2014-05-22 Thread Larry Bassel
Make calls to ct_user_enter when the kernel is exited
and ct_user_exit when the kernel is entered (in el0_da,
el0_ia, el0_svc, el0_irq and all of the "error" paths).

These macros expand to function calls which will only work
properly if el0_sync and related code has been rearranged
(in a previous patch of this series).

The calls to ct_user_exit are made after hw debugging has been
enabled (enable_dbg_and_irq).

The call to ct_user_enter is made at the beginning of the
kernel_exit macro.

This patch is based on earlier work by Kevin Hilman.
Save/restore optimizations were also done by Kevin.

Signed-off-by: Kevin Hilman <khil...@linaro.org>
Signed-off-by: Larry Bassel <larry.bas...@linaro.org>
---
 arch/arm64/Kconfig   |  1 +
 arch/arm64/include/asm/thread_info.h |  1 +
 arch/arm64/kernel/entry.S| 48 
 3 files changed, 50 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e759af5..ef18ae5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -55,6 +55,7 @@ config ARM64
select RTC_LIB
select SPARSE_IRQ
select SYSCTL_EXCEPTION_TRACE
+   select HAVE_CONTEXT_TRACKING
help
  ARM 64-bit (AArch64) Linux support.
 
diff --git a/arch/arm64/include/asm/thread_info.h 
b/arch/arm64/include/asm/thread_info.h
index 720e70b..301ea6a 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -108,6 +108,7 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_SINGLESTEP 21
 #define TIF_32BIT  22  /* 32bit process */
 #define TIF_SWITCH_MM  23  /* deferred switch_mm */
+#define TIF_NOHZ   24
 
 #define _TIF_SIGPENDING   (1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED  (1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 20b336e..520da4c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -30,6 +30,44 @@
 #include <asm/unistd32.h>
 
 /*
+ * Context tracking subsystem.  Used to instrument transitions
+ * between user and kernel mode.
+ */
+   .macro ct_user_exit, restore = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+   bl  context_tracking_user_exit
+   .if \restore == 1
+   /*
+* Save/restore needed during syscalls.  Restore syscall arguments from
+* the values already saved on stack during kernel_entry.
+*/
+   ldp x0, x1, [sp]
+   ldp x2, x3, [sp, #S_X2]
+   ldp x4, x5, [sp, #S_X4]
+   ldp x6, x7, [sp, #S_X6]
+   .endif
+#endif
+   .endm
+
+   .macro ct_user_enter, save = 0
+#ifdef CONFIG_CONTEXT_TRACKING
+   .if \save == 1
+   /*
+* Save/restore only needed on syscall fastpath, which uses
+* x0-x2.
+*/
+   push x2, x3
+   push x0, x1
+   .endif
+   bl  context_tracking_user_enter
+   .if \save == 1
+   pop x0, x1
+   pop x2, x3
+   .endif
+#endif
+   .endm
+
+/*
  * Bad Abort numbers
  *-
  */
@@ -91,6 +129,7 @@
.macro  kernel_exit, el, ret = 0
ldp x21, x22, [sp, #S_PC]   // load ELR, SPSR
.if \el == 0
+   ct_user_enter \ret
ldr x23, [sp, #S_SP]// load return stack pointer
.endif
.if \ret
@@ -318,6 +357,7 @@ el1_irq:
bl  trace_hardirqs_off
 #endif
 
+   ct_user_exit
irq_handler
 
 #ifdef CONFIG_PREEMPT
@@ -427,6 +467,7 @@ el0_da:
mrs x26, far_el1
// enable interrupts before calling the main handler
enable_dbg_and_irq
+   ct_user_exit
mov x0, x26
bic x0, x0, #(0xff << 56)
mov x1, x25
@@ -440,6 +481,7 @@ el0_ia:
mrs x26, far_el1
// enable interrupts before calling the main handler
enable_dbg_and_irq
+   ct_user_exit
mov x0, x26
orr x1, x25, #1 << 24   // use reserved ISS bit for instruction aborts
mov x2, sp
@@ -450,6 +492,7 @@ el0_fpsimd_acc:
 * Floating Point or Advanced SIMD access
 */
enable_dbg
+   ct_user_exit
mov x0, x25
mov x1, sp
adr lr, ret_to_user
@@ -459,6 +502,7 @@ el0_fpsimd_exc:
 * Floating Point or Advanced SIMD exception
 */
enable_dbg
+   ct_user_exit
mov x0, x25
mov x1, sp
adr lr, ret_to_user
@@ -481,6 +525,7 @@ el0_undef:
 */
// enable interrupts before calling the main handler
enable_dbg_and_irq
+   ct_user_exit
mov x0, sp
adr lr, ret_to_user
b   do_undefinstr
@@ -495,10 +540,12 @@ el0_dbg:
mov x2, sp
bl  do_debug_exception
enable_dbg
+   ct_user_exit
mov x0, x26
b   ret_to_user
 el0_inv:
enable_dbg
+   ct_user_exit
mov x0, sp
