Re: Enable arm64 PAN feature

2020-08-18 Thread Mark Kettenis
> From: Dale Rahn 
> Date: Mon, 17 Aug 2020 18:33:29 -0500
> 
> could we check for an ESR value that indicates a PAN violation
> instead of using 'instruction recognition'?

Doesn't exist unfortunately.  You get a protection fault, but you get
the same protection fault if you try to write to a read-only page for
example.
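
For concreteness: the fault status is reported in the DFSC field of the
ESR_EL1 ISS (bits [5:0]), where 0x0d-0x0f encode permission faults at
translation levels 1-3.  A PAN violation and a write to a read-only
(e.g. CoW) page report the same codes.  A minimal decode sketch, with a
hypothetical helper name:

	/*
	 * Both a PAN-blocked access and a write to a read-only page
	 * show up here; nothing in the ESR tells them apart.
	 */
	static inline int
	is_permission_fault(uint64_t esr)
	{
		uint64_t dfsc = esr & 0x3f;	/* ISS.DFSC, bits [5:0] */

		return (dfsc >= 0x0d && dfsc <= 0x0f);
	}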

> Seems that it would be more reliable.
> Thanks
> Dale
> 
> On Mon, Aug 17, 2020 at 1:30 AM Jonathan Gray  wrote:
> 
>  On Sat, Aug 15, 2020 at 01:54:34PM +0200, Mark Kettenis wrote:
>  > > Date: Sat, 15 Aug 2020 20:21:09 +1000
>  > > From: Jonathan Gray 
>  > > 
>  > > On Fri, Aug 14, 2020 at 11:06:59PM +0200, Mark Kettenis wrote:
>  > > > > Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
>  > > > > From: Mark Kettenis 
>  > > > > 
>  > > > > I suppose a way to test this properly is to pick a system call and
>  > > > > replace a copyin() with a direct access?  That will succeed without
>  > > > > PAN but should fail with PAN enabled right?
>  > > > 
>  > > > So that does indeed work.  However, the result is a hard hang.  So
>  > > > here is an additional diff that makes sure we panic instead.  The idea
>  > > > is that all user-space access from the kernel should be done by the
>  > > > special unprivileged load/store instructions.
>  > > 
>  > > Would disabling PSTATE.PAN in copyin/copyout along the lines of how
>  > > stac/clac is done for SMAP avoid having to test the instruction type
>  > > entirely?
>  > 
>  > No.  The problem is that we need to catch permission faults caused by
>  > PAN.  But the faulting address may be valid in the sense that UVM has
>  > a mapping for it that allows the requested access.  In that case we
>  > end up looping since uvm_fault() returns 0 and we retry the faulting
>  > instruction.
>  > 
>  > > Is it faulting on valid copyin/copyout the way you have it at the
>  > > moment?  I don't quite follow what is going on.
>  > 
>  > The copyin/copyout functions use the unprivileged load/store
>  > instructions (LDTR/STTR) which bypass PAN.  But we may still fault
>  > because the memory hasn't been faulted in or because the memory has
>  > been marked read-only for CoW or for MOD/REF emulation.  And some of
>  > those faults manifest themselves as permission faults as well.
>  > 
>  > Currently (without the diff quoted below) those faults will be handled
>  > just fine.  The diff below needs to make sure this continues to be the
>  > case.  The easiest way to do that is to check the instruction.
>  > 
>  > Note that this check is in the "slow path".  In most cases the address
>  > touched by copyin/copyout will already be in the page tables since
>  > userland touched it already.
>  > 
>  > Does that clarify things?
> 
>  Yes, thanks.  I'm fine with both of these diffs going in but still think
>  you should change the mask.
> 
>  > 
>  > 
>  > > > Index: arch/arm64/arm64/trap.c
>  > > > ===
>  > > > RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
>  > > > retrieving revision 1.27
>  > > > diff -u -p -r1.27 trap.c
>  > > > --- arch/arm64/arm64/trap.c   6 Jan 2020 12:37:30 -   1.27
>  > > > +++ arch/arm64/arm64/trap.c   14 Aug 2020 21:05:54 -
>  > > > @@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
>  > > >  
>  > > >  void dumpregs(struct trapframe*);
>  > > >  
>  > > > +/* Check whether we're executing an unprivileged load/store instruction. */
>  > > > +static inline int
>  > > > +is_unpriv_ldst(uint64_t elr)
>  > > > +{
>  > > > + uint32_t insn = *(uint32_t *)elr;
>  > > > + return ((insn & 0x3f200c00) == 0x38000800);
>  > > 
>  > > The value of op1 (bit 26) is not used according to the table in the
>  > > Arm ARM.  The mask would be better as 0x3b200c00
>  > > 
>  > > 
>  > > > +}
>  > > > +
>  > > >  static void
>  > > >  data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
>  > > >  int lower, int exe)
>  > > > @@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
>  > > >   /* The top bit tells us which range to use */
>  > > >   if ((far >> 63) == 1)
>  > > >   map = kernel_map;
>  > > > - else
>  > > > + else {
>  > > > + /*
>  > > > +  * Only allow user-space access using
>  > > > +  * unprivileged load/store instructions.
>  > > > +  */
>  > > > + if (!is_unpriv_ldst(frame->tf_elr)) {
>  > > > + panic("attempt to access user address"
>  > > > +   " 0x%llx from EL1", far);
>  > > > + }
>  > > > +
>  > > >   map = &p->p_vmspace->vm_map;
>  > > > + }
>  > > >   }
>  > > >  
>  > > >   if (exe)
>  > > > 
>  > > > 
>  > > 
>  > 
>  > 



Re: Enable arm64 PAN feature

2020-08-17 Thread Dale Rahn
could we check for an ESR value that indicates a PAN violation
instead of using 'instruction recognition'?

Seems that it would be more reliable.
Thanks
Dale

On Mon, Aug 17, 2020 at 1:30 AM Jonathan Gray  wrote:

> On Sat, Aug 15, 2020 at 01:54:34PM +0200, Mark Kettenis wrote:
> > > Date: Sat, 15 Aug 2020 20:21:09 +1000
> > > From: Jonathan Gray 
> > >
> > > On Fri, Aug 14, 2020 at 11:06:59PM +0200, Mark Kettenis wrote:
> > > > > Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
> > > > > From: Mark Kettenis 
> > > > >
> > > > > I suppose a way to test this properly is to pick a system call and
> > > > > replace a copyin() with a direct access?  That will succeed without
> > > > > PAN but should fail with PAN enabled right?
> > > >
> > > > So that does indeed work.  However, the result is a hard hang.  So
> > > > here is an additional diff that makes sure we panic instead.  The idea
> > > > is that all user-space access from the kernel should be done by the
> > > > special unprivileged load/store instructions.
> > >
> > > Would disabling PSTATE.PAN in copyin/copyout along the lines of how
> > > stac/clac is done for SMAP avoid having to test the instruction type
> > > entirely?
> >
> > No.  The problem is that we need to catch permission faults caused by
> > PAN.  But the faulting address may be valid in the sense that UVM has
> > a mapping for it that allows the requested access.  In that case we
> > end up looping since uvm_fault() returns 0 and we retry the faulting
> > instruction.
> >
> > > Is it faulting on valid copyin/copyout the way you have it at the
> > > moment?  I don't quite follow what is going on.
> >
> > The copyin/copyout functions use the unprivileged load/store
> > instructions (LDTR/STTR) which bypass PAN.  But we may still fault
> > because the memory hasn't been faulted in or because the memory has
> > been marked read-only for CoW or for MOD/REF emulation.  And some of
> > those faults manifest themselves as permission faults as well.
> >
> > Currently (without the diff quoted below) those faults will be handled
> > just fine.  The diff below needs to make sure this continues to be the
> > case.  The easiest way to do that is to check the instruction.
> >
> > Note that this check is in the "slow path".  In most cases the address
> > touched by copyin/copyout will already be in the page tables since
> > userland touched it already.
> >
> > Does that clarify things?
>
> Yes, thanks.  I'm fine with both of these diffs going in but still think
> you should change the mask.
>
> >
> >
> > > > Index: arch/arm64/arm64/trap.c
> > > > ===
> > > > RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
> > > > retrieving revision 1.27
> > > > diff -u -p -r1.27 trap.c
> > > > --- arch/arm64/arm64/trap.c   6 Jan 2020 12:37:30 -   1.27
> > > > +++ arch/arm64/arm64/trap.c   14 Aug 2020 21:05:54 -
> > > > @@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
> > > >
> > > >  void dumpregs(struct trapframe*);
> > > >
> > > > +/* Check whether we're executing an unprivileged load/store instruction. */
> > > > +static inline int
> > > > +is_unpriv_ldst(uint64_t elr)
> > > > +{
> > > > + uint32_t insn = *(uint32_t *)elr;
> > > > + return ((insn & 0x3f200c00) == 0x38000800);
> > >
> > > The value of op1 (bit 26) is not used according to the table in the Arm
> > > ARM.  The mask would be better as 0x3b200c00
> > >
> > >
> > > > +}
> > > > +
> > > >  static void
> > > >  data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
> > > >  int lower, int exe)
> > > > @@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
> > > >   /* The top bit tells us which range to use */
> > > >   if ((far >> 63) == 1)
> > > >   map = kernel_map;
> > > > - else
> > > > + else {
> > > > + /*
> > > > +  * Only allow user-space access using
> > > > +  * unprivileged load/store instructions.
> > > > +  */
> > > > + if (!is_unpriv_ldst(frame->tf_elr)) {
> > > > + panic("attempt to access user address"
> > > > +   " 0x%llx from EL1", far);
> > > > + }
> > > > +
> > > >   map = &p->p_vmspace->vm_map;
> > > > + }
> > > >   }
> > > >
> > > >   if (exe)
> > > >
> > > >
> > >
> >
> >
>
>


Re: Enable arm64 PAN feature

2020-08-17 Thread Jonathan Gray
On Sat, Aug 15, 2020 at 01:54:34PM +0200, Mark Kettenis wrote:
> > Date: Sat, 15 Aug 2020 20:21:09 +1000
> > From: Jonathan Gray 
> > 
> > On Fri, Aug 14, 2020 at 11:06:59PM +0200, Mark Kettenis wrote:
> > > > Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
> > > > From: Mark Kettenis 
> > > > 
> > > > I suppose a way to test this properly is to pick a system call and
> > > > replace a copyin() with a direct access?  That will succeed without
> > > > PAN but should fail with PAN enabled right?
> > > 
> > > So that does indeed work.  However, the result is a hard hang.  So
> > > here is an additional diff that makes sure we panic instead.  The idea
> > > is that all user-space access from the kernel should be done by the
> > > special unprivileged load/store instructions.
> > 
> > Would disabling PSTATE.PAN in copyin/copyout along the lines of how
> > stac/clac is done for SMAP avoid having to test the instruction type
> > entirely?
> 
> No.  The problem is that we need to catch permission faults caused by
> PAN.  But the faulting address may be valid in the sense that UVM has
> a mapping for it that allows the requested access.  In that case we
> end up looping since uvm_fault() returns 0 and we retry the faulting
> instruction.
> 
> > Is it faulting on valid copyin/copyout the way you have it at the
> > moment?  I don't quite follow what is going on.
> 
> The copyin/copyout functions use the unprivileged load/store
> instructions (LDTR/STTR) which bypass PAN.  But we may still fault
> because the memory hasn't been faulted in or because the memory has
> been marked read-only for CoW or for MOD/REF emulation.  And some of
> those faults manifest themselves as permission faults as well.
> 
> Currently (without the diff quoted below) those faults will be handled
> just fine.  The diff below needs to make sure this continues to be the
> case.  The easiest way to do that is to check the instruction.
> 
> Note that this check is in the "slow path".  In most cases the address
> touched by copyin/copyout will already be in the page tables since
> userland touched it already.
> 
> Does that clarify things?

Yes, thanks.  I'm fine with both of these diffs going in but still think
you should change the mask.

> 
> 
> > > Index: arch/arm64/arm64/trap.c
> > > ===
> > > RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
> > > retrieving revision 1.27
> > > diff -u -p -r1.27 trap.c
> > > --- arch/arm64/arm64/trap.c   6 Jan 2020 12:37:30 -   1.27
> > > +++ arch/arm64/arm64/trap.c   14 Aug 2020 21:05:54 -
> > > @@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
> > >  
> > >  void dumpregs(struct trapframe*);
> > >  
> > > +/* Check whether we're executing an unprivileged load/store instruction. */
> > > +static inline int
> > > +is_unpriv_ldst(uint64_t elr)
> > > +{
> > > + uint32_t insn = *(uint32_t *)elr;
> > > + return ((insn & 0x3f200c00) == 0x38000800);
> > 
> > The value of op1 (bit 26) is not used according to the table in the Arm
> > ARM.  The mask would be better as 0x3b200c00
> > 
> > 
> > > +}
> > > +
> > >  static void
> > >  data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
> > >  int lower, int exe)
> > > @@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
> > >   /* The top bit tells us which range to use */
> > >   if ((far >> 63) == 1)
> > >   map = kernel_map;
> > > - else
> > > + else {
> > > + /*
> > > +  * Only allow user-space access using
> > > +  * unprivileged load/store instructions.
> > > +  */
> > > + if (!is_unpriv_ldst(frame->tf_elr)) {
> > > + panic("attempt to access user address"
> > > +   " 0x%llx from EL1", far);
> > > + }
> > > +
> > >   map = &p->p_vmspace->vm_map;
> > > + }
> > >   }
> > >  
> > >   if (exe)
> > > 
> > > 
> > 
> 
> 



Re: Enable arm64 PAN feature

2020-08-15 Thread Dale Rahn
Enabling PAN is a great idea. I have only skimmed this diff at this point,
but it looks reasonable, with the additional check to catch the PAN
violation in the data abort handler.

Dale

On Sat, Aug 15, 2020 at 6:56 AM Mark Kettenis  wrote:

> > Date: Sat, 15 Aug 2020 20:21:09 +1000
> > From: Jonathan Gray 
> >
> > On Fri, Aug 14, 2020 at 11:06:59PM +0200, Mark Kettenis wrote:
> > > > Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
> > > > From: Mark Kettenis 
> > > >
> > > > I suppose a way to test this properly is to pick a system call and
> > > > replace a copyin() with a direct access?  That will succeed without
> > > > PAN but should fail with PAN enabled right?
> > >
> > > So that does indeed work.  However, the result is a hard hang.  So
> > > here is an additional diff that makes sure we panic instead.  The idea
> > > is that all user-space access from the kernel should be done by the
> > > special unprivileged load/store instructions.
> >
> > Would disabling PSTATE.PAN in copyin/copyout along the lines of how
> > stac/clac is done for SMAP avoid having to test the instruction type
> > entirely?
>
> No.  The problem is that we need to catch permission faults caused by
> PAN.  But the faulting address may be valid in the sense that UVM has
> a mapping for it that allows the requested access.  In that case we
> end up looping since uvm_fault() returns 0 and we retry the faulting
> instruction.
>
> > Is it faulting on valid copyin/copyout the way you have it at the
> > moment?  I don't quite follow what is going on.
>
> The copyin/copyout functions use the unprivileged load/store
> instructions (LDTR/STTR) which bypass PAN.  But we may still fault
> because the memory hasn't been faulted in or because the memory has
> been marked read-only for CoW or for MOD/REF emulation.  And some of
> those faults manifest themselves as permission faults as well.
>
> Currently (without the diff quoted below) those faults will be handled
> just fine.  The diff below needs to make sure this continues to be the
> case.  The easiest way to do that is to check the instruction.
>
> Note that this check is in the "slow path".  In most cases the address
> touched by copyin/copyout will already be in the page tables since
> userland touched it already.
>
> Does that clarify things?
>
>
> > > Index: arch/arm64/arm64/trap.c
> > > ===
> > > RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
> > > retrieving revision 1.27
> > > diff -u -p -r1.27 trap.c
> > > --- arch/arm64/arm64/trap.c 6 Jan 2020 12:37:30 -   1.27
> > > +++ arch/arm64/arm64/trap.c 14 Aug 2020 21:05:54 -
> > > @@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
> > >
> > >  void dumpregs(struct trapframe*);
> > >
> > > +/* Check whether we're executing an unprivileged load/store instruction. */
> > > +static inline int
> > > +is_unpriv_ldst(uint64_t elr)
> > > +{
> > > +   uint32_t insn = *(uint32_t *)elr;
> > > +   return ((insn & 0x3f200c00) == 0x38000800);
> >
> > The value of op1 (bit 26) is not used according to the table in the Arm
> > ARM.  The mask would be better as 0x3b200c00
> >
> >
> > > +}
> > > +
> > >  static void
> > >  data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
> > >  int lower, int exe)
> > > @@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
> > > /* The top bit tells us which range to use */
> > > if ((far >> 63) == 1)
> > > map = kernel_map;
> > > -   else
> > > +   else {
> > > +   /*
> > > +* Only allow user-space access using
> > > +* unprivileged load/store instructions.
> > > +*/
> > > +   if (!is_unpriv_ldst(frame->tf_elr)) {
> > > +   panic("attempt to access user address"
> > > + " 0x%llx from EL1", far);
> > > +   }
> > > +
> > > map = &p->p_vmspace->vm_map;
> > > +   }
> > > }
> > >
> > > if (exe)
> > >
> > >
> >
>
>


Re: Enable arm64 PAN feature

2020-08-15 Thread Mark Kettenis
> Date: Sat, 15 Aug 2020 20:21:09 +1000
> From: Jonathan Gray 
> 
> On Fri, Aug 14, 2020 at 11:06:59PM +0200, Mark Kettenis wrote:
> > > Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
> > > From: Mark Kettenis 
> > > 
> > > I suppose a way to test this properly is to pick a system call and
> > > replace a copyin() with a direct access?  That will succeed without
> > > PAN but should fail with PAN enabled right?
> > 
> > So that does indeed work.  However, the result is a hard hang.  So
> > here is an additional diff that makes sure we panic instead.  The idea
> > is that all user-space access from the kernel should be done by the
> > special unprivileged load/store instructions.
> 
> Would disabling PSTATE.PAN in copyin/copyout along the lines of how
> stac/clac is done for SMAP avoid having to test the instruction type
> entirely?

No.  The problem is that we need to catch permission faults caused by
PAN.  But the faulting address may be valid in the sense that UVM has
a mapping for it that allows the requested access.  In that case we
end up looping since uvm_fault() returns 0 and we retry the
faulting instruction.
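
In pseudocode, the failure mode is roughly this (a paraphrase, not
verbatim trap.c):

	/* data abort taken from EL1 on a user address UVM knows about */
	error = uvm_fault(map, far, 0, access_type);
	if (error == 0)
		return;		/* "handled"; eret retries the instruction */

	/*
	 * With PAN set the retried instruction takes the same permission
	 * fault, uvm_fault() succeeds again, and the CPU spins in this
	 * trap/retry cycle: the hard hang described above.
	 */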

> Is it faulting on valid copyin/copyout the way you have it at the
> moment?  I don't quite follow what is going on.

The copyin/copyout functions use the unprivileged load/store
instructions (LDTR/STTR) which bypass PAN.  But we may still fault
because the memory hasn't been faulted in or because the memory has
been marked read-only for CoW or for MOD/REF emulation.  And some of
those faults manifest themselves as permission faults as well.
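
For illustration, fetching a single user byte with the unprivileged
variant could look like this (a sketch with a made-up helper name, not
the real word-sized, fault-guarded copyin):

	static inline uint8_t
	fubyte_unpriv(const void *uaddr)
	{
		uint8_t val;

		/* LDTRB checks EL0 permissions even when run at EL1 */
		__asm volatile ("ldtrb %w0, [%1]" : "=r"(val) : "r"(uaddr));
		return val;
	}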

Currently (without the diff quoted below) those faults will be handled
just fine.  The diff below needs to make sure this continues to be the
case.  The easiest way to do that is to check the instruction.

Note that this check is in the "slow path".  In most cases the address
touched by copyin/copyout will already be in the page tables since
userland touched it already.

Does that clarify things?


> > Index: arch/arm64/arm64/trap.c
> > ===
> > RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
> > retrieving revision 1.27
> > diff -u -p -r1.27 trap.c
> > --- arch/arm64/arm64/trap.c 6 Jan 2020 12:37:30 -   1.27
> > +++ arch/arm64/arm64/trap.c 14 Aug 2020 21:05:54 -
> > @@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
> >  
> >  void dumpregs(struct trapframe*);
> >  
> > +/* Check whether we're executing an unprivileged load/store instruction. */
> > +static inline int
> > +is_unpriv_ldst(uint64_t elr)
> > +{
> > +   uint32_t insn = *(uint32_t *)elr;
> > +   return ((insn & 0x3f200c00) == 0x38000800);
> 
> The value of op1 (bit 26) is not used according to the table in the Arm
> ARM.  The mask would be better as 0x3b200c00
> 
> 
> > +}
> > +
> >  static void
> >  data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
> >  int lower, int exe)
> > @@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
> > /* The top bit tells us which range to use */
> > if ((far >> 63) == 1)
> > map = kernel_map;
> > -   else
> > +   else {
> > +   /*
> > +* Only allow user-space access using
> > +* unprivileged load/store instructions.
> > +*/
> > +   if (!is_unpriv_ldst(frame->tf_elr)) {
> > +   panic("attempt to access user address"
> > + " 0x%llx from EL1", far);
> > +   }
> > +
> > map = &p->p_vmspace->vm_map;
> > +   }
> > }
> >  
> > if (exe)
> > 
> > 
> 



Re: Enable arm64 PAN feature

2020-08-15 Thread Jonathan Gray
On Fri, Aug 14, 2020 at 11:06:59PM +0200, Mark Kettenis wrote:
> > Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
> > From: Mark Kettenis 
> > 
> > I suppose a way to test this properly is to pick a system call and
> > replace a copyin() with a direct access?  That will succeed without
> > PAN but should fail with PAN enabled right?
> 
> So that does indeed work.  However, the result is a hard hang.  So
> here is an additional diff that makes sure we panic instead.  The idea
> is that all user-space access from the kernel should be done by the
> special unprivileged load/store instructions.

Would disabling PSTATE.PAN in copyin/copyout along the lines of how
stac/clac is done for SMAP avoid having to test the instruction type
entirely?

Is it faulting on valid copyin/copyout the way you have it at the
moment?  I don't quite follow what is going on.

> 
> Index: arch/arm64/arm64/trap.c
> ===
> RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
> retrieving revision 1.27
> diff -u -p -r1.27 trap.c
> --- arch/arm64/arm64/trap.c   6 Jan 2020 12:37:30 -   1.27
> +++ arch/arm64/arm64/trap.c   14 Aug 2020 21:05:54 -
> @@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
>  
>  void dumpregs(struct trapframe*);
>  
> +/* Check whether we're executing an unprivileged load/store instruction. */
> +static inline int
> +is_unpriv_ldst(uint64_t elr)
> +{
> + uint32_t insn = *(uint32_t *)elr;
> + return ((insn & 0x3f200c00) == 0x38000800);

The value of op1 (bit 26) is not used according to the table in the Arm
ARM.  The mask would be better as 0x3b200c00
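
For reference, the field layout of the "Load/store register
(unprivileged)" group, and the same check with the suggested mask (a
sketch differing from the diff only in the mask):

	/*
	 * 31 30|29 28 27|26 |25 24|23 22|21|20 ...... 12|11 10|9..5|4..0
	 * size |1  1  1 |op1|0  0 | opc |0 |    imm9    |1  0 | Rn | Rt
	 *
	 * 0x3f200c00 also tests bit 26 (op1), which the decode table
	 * does not use; 0x3b200c00 covers only the allocated fields.
	 */
	static inline int
	is_unpriv_ldst(uint64_t elr)
	{
		uint32_t insn = *(uint32_t *)elr;
		return ((insn & 0x3b200c00) == 0x38000800);
	}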


> +}
> +
>  static void
>  data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
>  int lower, int exe)
> @@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
>   /* The top bit tells us which range to use */
>   if ((far >> 63) == 1)
>   map = kernel_map;
> - else
> + else {
> + /*
> +  * Only allow user-space access using
> +  * unprivileged load/store instructions.
> +  */
> + if (!is_unpriv_ldst(frame->tf_elr)) {
> + panic("attempt to access user address"
> +   " 0x%llx from EL1", far);
> + }
> +
>   map = &p->p_vmspace->vm_map;
> + }
>   }
>  
>   if (exe)
> 
> 



Re: Enable arm64 PAN feature

2020-08-14 Thread Mark Kettenis
> Date: Fri, 14 Aug 2020 14:40:23 +0200 (CEST)
> From: Mark Kettenis 
> 
> I suppose a way to test this properly is to pick a system call and
> replace a copyin() with a direct access?  That will succeed without
> PAN but should fail with PAN enabled right?

So that does indeed work.  However, the result is a hard hang.  So
here is an additional diff that makes sure we panic instead.  The idea
is that all user-space access from the kernel should be done by the
special unprivileged load/store instructions.
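
For illustration, the kind of throwaway test meant here, with
hypothetical syscall argument and variable names:

	/* normally the handler does: */
	error = copyin(SCARG(uap, tp), &tv, sizeof(tv));

	/*
	 * test hack: dereference the user pointer directly; this
	 * privileged access should now fault with PAN enabled
	 */
	memcpy(&tv, SCARG(uap, tp), sizeof(tv));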

Index: arch/arm64/arm64/trap.c
===
RCS file: /cvs/src/sys/arch/arm64/arm64/trap.c,v
retrieving revision 1.27
diff -u -p -r1.27 trap.c
--- arch/arm64/arm64/trap.c 6 Jan 2020 12:37:30 -   1.27
+++ arch/arm64/arm64/trap.c 14 Aug 2020 21:05:54 -
@@ -65,6 +65,14 @@ void do_el0_error(struct trapframe *);
 
 void dumpregs(struct trapframe*);
 
+/* Check whether we're executing an unprivileged load/store instruction. */
+static inline int
+is_unpriv_ldst(uint64_t elr)
+{
+   uint32_t insn = *(uint32_t *)elr;
+   return ((insn & 0x3f200c00) == 0x38000800);
+}
+
 static void
 data_abort(struct trapframe *frame, uint64_t esr, uint64_t far,
 int lower, int exe)
@@ -104,8 +112,18 @@ data_abort(struct trapframe *frame, uint
/* The top bit tells us which range to use */
if ((far >> 63) == 1)
map = kernel_map;
-   else
+   else {
+   /*
+* Only allow user-space access using
+* unprivileged load/store instructions.
+*/
+   if (!is_unpriv_ldst(frame->tf_elr)) {
+   panic("attempt to access user address"
+ " 0x%llx from EL1", far);
+   }
+
map = &p->p_vmspace->vm_map;
+   }
}
 
if (exe)



Re: Enable arm64 PAN feature

2020-08-14 Thread Mark Kettenis
> Date: Fri, 14 Aug 2020 12:29:51 +1000
> From: Jonathan Gray 
> 
> On Thu, Aug 13, 2020 at 09:17:41PM +0200, Mark Kettenis wrote:
> > ARMv8.1 introduced PAN (Privileged Access Never) which prevents the
> > kernel from accessing userland data.  This can be bypassed by using
> > special instructions which we already use in copyin(9) and friends.
> > So we can simply turn this feature on if the CPU supports it.
> > 
> > Tested on an Odroid-C4 which has Cortex-A55 cores that have PAN
> > support.
> > 
> > ok?
> 
> This should be changing PSTATE.PAN as well.  Can you force an
> access of this type to be sure the permission fault occurs?
> 
> From the Arm ARM:
> 
> "When the value of PSTATE.PAN is 1, any privileged data access from
> EL1, or EL2 when HCR_EL2.E2H is 1, to a virtual memory address that
> is accessible at EL0, generates a Permission fault.
> 
> When the value of PSTATE.PAN is 0, the translation system is the
> same as in Armv8.0.
> 
> When ARMv8.1-PAN is implemented, the SPSR_EL1.PAN, SPSR_EL2.PAN, and
> SPSR_EL3.PAN bits are used for exception returns, and the DSPSR_EL0
> register is used for entry to or exit from Debug state.
> 
> When ARMv8.1-PAN is implemented, the SCTLR_EL1.SPAN and SCTLR_EL2.SPAN
> bits are used to control whether the PAN bit is set on an exception
> to EL1 or EL2."

By clearing SCTLR_EL1.SPAN, PSTATE.PAN will be set on an exception to
EL1.  Yes, this does mean that PSTATE.PAN won't be set until a CPU
returns to userland for the first time.  But from then on PSTATE.PAN
will be set.
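
For completeness, ARMv8.1 also adds an MSR immediate form that writes
PSTATE.PAN directly, so the bit could be set right away rather than on
the first exception return.  A sketch, assuming an assembler that
accepts the PAN extension (not needed here, since the diff relies on
SPAN):

	static inline void
	enable_pan_now(void)
	{
		/* set PSTATE.PAN = 1 on this CPU immediately */
		__asm volatile (".arch_extension pan\n\tmsr pan, #1");
	}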

I suppose a way to test this properly is to pick a system call and
replace a copyin() with a direct access?  That will succeed without
PAN but should fail with PAN enabled right?

> > Index: arch/arm64/arm64/cpu.c
> > ===
> > RCS file: /cvs/src/sys/arch/arm64/arm64/cpu.c,v
> > retrieving revision 1.38
> > diff -u -p -r1.38 cpu.c
> > --- arch/arm64/arm64/cpu.c  4 Jun 2020 21:18:16 -   1.38
> > +++ arch/arm64/arm64/cpu.c  13 Aug 2020 19:12:30 -
> > @@ -321,6 +321,7 @@ cpu_attach(struct device *parent, struct
> > struct fdt_attach_args *faa = aux;
> > struct cpu_info *ci;
> > uint64_t mpidr = READ_SPECIALREG(mpidr_el1);
> > +   uint64_t id_aa64mmfr1, sctlr;
> > uint32_t opp;
> >  
> > KASSERT(faa->fa_nreg > 0);
> > @@ -393,6 +394,14 @@ cpu_attach(struct device *parent, struct
> > cpu_cpuspeed = cpu_clockspeed;
> > }
> >  
> > +   /* Enable PAN. */
> > +   id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> > +   if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> > +   sctlr = READ_SPECIALREG(sctlr_el1);
> > +   sctlr &= ~SCTLR_SPAN;
> > +   WRITE_SPECIALREG(sctlr_el1, sctlr);
> > +   }
> > +
> > /* Initialize debug registers. */
> > WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
> > WRITE_SPECIALREG(oslar_el1, 0);
> > @@ -522,6 +531,7 @@ cpu_boot_secondary(struct cpu_info *ci)
> >  void
> >  cpu_start_secondary(struct cpu_info *ci)
> >  {
> > +   uint64_t id_aa64mmfr1, sctlr;
> > uint64_t tcr;
> > int s;
> >  
> > @@ -543,6 +553,14 @@ cpu_start_secondary(struct cpu_info *ci)
> > tcr |= TCR_T0SZ(64 - USER_SPACE_BITS);
> > tcr |= TCR_A1;
> > WRITE_SPECIALREG(tcr_el1, tcr);
> > +
> > +   /* Enable PAN. */
> > +   id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> > +   if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> > +   sctlr = READ_SPECIALREG(sctlr_el1);
> > +   sctlr &= ~SCTLR_SPAN;
> > +   WRITE_SPECIALREG(sctlr_el1, sctlr);
> > +   }
> >  
> > /* Initialize debug registers. */
> > WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
> > Index: arch/arm64/include/armreg.h
> > ===
> > RCS file: /cvs/src/sys/arch/arm64/include/armreg.h,v
> > retrieving revision 1.11
> > diff -u -p -r1.11 armreg.h
> > --- arch/arm64/include/armreg.h 5 Jun 2020 22:14:25 -   1.11
> > +++ arch/arm64/include/armreg.h 13 Aug 2020 19:12:30 -
> > @@ -451,6 +451,7 @@
> >  #define SCTLR_nTWI  0x0001
> >  #define SCTLR_nTWE  0x0004
> >  #define SCTLR_WXN   0x0008
> > +#define SCTLR_SPAN  0x0080
> >  #define SCTLR_EOE   0x0100
> >  #define SCTLR_EE    0x0200
> >  #define SCTLR_UCI   0x0400
> > @@ -478,6 +479,7 @@
> >  #define PSR_D   0x0200
> >  #define PSR_IL  0x0010
> >  #define PSR_SS  0x0020
> > +#define PSR_PAN 0x0040
> >  #define PSR_V   0x1000
> >  #define PSR_C   0x2000
> >  #define PSR_Z   0x4000
> > 
> > 
> 



Re: Enable arm64 PAN feature

2020-08-13 Thread Jonathan Gray
On Thu, Aug 13, 2020 at 09:17:41PM +0200, Mark Kettenis wrote:
> ARMv8.1 introduced PAN (Privileged Access Never) which prevents the
> kernel from accessing userland data.  This can be bypassed by using
> special instructions which we already use in copyin(9) and friends.
> So we can simply turn this feature on if the CPU supports it.
> 
> Tested on an Odroid-C4 which has Cortex-A55 cores that have PAN
> support.
> 
> ok?

This should be changing PSTATE.PAN as well.  Can you force an
access of this type to be sure the permission fault occurs?

From the Arm ARM:

"When the value of PSTATE.PAN is 1, any privileged data access from
EL1, or EL2 when HCR_EL2.E2H is 1, to a virtual memory address that
is accessible at EL0, generates a Permission fault.

When the value of PSTATE.PAN is 0, the translation system is the
same as in Armv8.0.

When ARMv8.1-PAN is implemented, the SPSR_EL1.PAN, SPSR_EL2.PAN, and
SPSR_EL3.PAN bits are used for exception returns, and the DSPSR_EL0
register is used for entry to or exit from Debug state.

When ARMv8.1-PAN is implemented, the SCTLR_EL1.SPAN and SCTLR_EL2.SPAN
bits are used to control whether the PAN bit is set on an exception
to EL1 or EL2."

> 
> 
> Index: arch/arm64/arm64/cpu.c
> ===
> RCS file: /cvs/src/sys/arch/arm64/arm64/cpu.c,v
> retrieving revision 1.38
> diff -u -p -r1.38 cpu.c
> --- arch/arm64/arm64/cpu.c4 Jun 2020 21:18:16 -   1.38
> +++ arch/arm64/arm64/cpu.c13 Aug 2020 19:12:30 -
> @@ -321,6 +321,7 @@ cpu_attach(struct device *parent, struct
>   struct fdt_attach_args *faa = aux;
>   struct cpu_info *ci;
>   uint64_t mpidr = READ_SPECIALREG(mpidr_el1);
> + uint64_t id_aa64mmfr1, sctlr;
>   uint32_t opp;
>  
>   KASSERT(faa->fa_nreg > 0);
> @@ -393,6 +394,14 @@ cpu_attach(struct device *parent, struct
>   cpu_cpuspeed = cpu_clockspeed;
>   }
>  
> + /* Enable PAN. */
> + id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> + if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> + sctlr = READ_SPECIALREG(sctlr_el1);
> + sctlr &= ~SCTLR_SPAN;
> + WRITE_SPECIALREG(sctlr_el1, sctlr);
> + }
> +
>   /* Initialize debug registers. */
>   WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
>   WRITE_SPECIALREG(oslar_el1, 0);
> @@ -522,6 +531,7 @@ cpu_boot_secondary(struct cpu_info *ci)
>  void
>  cpu_start_secondary(struct cpu_info *ci)
>  {
> + uint64_t id_aa64mmfr1, sctlr;
>   uint64_t tcr;
>   int s;
>  
> @@ -543,6 +553,14 @@ cpu_start_secondary(struct cpu_info *ci)
>   tcr |= TCR_T0SZ(64 - USER_SPACE_BITS);
>   tcr |= TCR_A1;
>   WRITE_SPECIALREG(tcr_el1, tcr);
> +
> + /* Enable PAN. */
> + id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> + if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> + sctlr = READ_SPECIALREG(sctlr_el1);
> + sctlr &= ~SCTLR_SPAN;
> + WRITE_SPECIALREG(sctlr_el1, sctlr);
> + }
>  
>   /* Initialize debug registers. */
>   WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
> Index: arch/arm64/include/armreg.h
> ===
> RCS file: /cvs/src/sys/arch/arm64/include/armreg.h,v
> retrieving revision 1.11
> diff -u -p -r1.11 armreg.h
> --- arch/arm64/include/armreg.h   5 Jun 2020 22:14:25 -   1.11
> +++ arch/arm64/include/armreg.h   13 Aug 2020 19:12:30 -
> @@ -451,6 +451,7 @@
>  #define  SCTLR_nTWI  0x0001
>  #define  SCTLR_nTWE  0x0004
>  #define  SCTLR_WXN   0x0008
> +#define  SCTLR_SPAN  0x0080
>  #define  SCTLR_EOE   0x0100
>  #define  SCTLR_EE    0x0200
>  #define  SCTLR_UCI   0x0400
> @@ -478,6 +479,7 @@
>  #define  PSR_D   0x0200
>  #define  PSR_IL  0x0010
>  #define  PSR_SS  0x0020
> +#define  PSR_PAN 0x0040
>  #define  PSR_V   0x1000
>  #define  PSR_C   0x2000
>  #define  PSR_Z   0x4000
> 
> 



Re: Enable arm64 PAN feature

2020-08-13 Thread Mark Kettenis
> Date: Thu, 13 Aug 2020 22:52:57 +0200
> From: Patrick Wildt 
> 
> On Thu, Aug 13, 2020 at 09:17:41PM +0200, Mark Kettenis wrote:
> > ARMv8.1 introduced PAN (Privileged Access Never) which prevents the
> > kernel from accessing userland data.  This can be bypassed by using
> > special instructions which we already use in copyin(9) and friends.
> > So we can simply turn this feature on if the CPU supports it.
> > 
> > Tested on an Odroid-C4 which has Cortex-A55 cores that have PAN
> > support.
> > 
> > ok?
> > 
> 
> So if I read this right, the SPAN bit controls whether an exception to
> kernel/hypervisor mode sets the PAN bit in PSTATE.  Exception
> also means "interrupt", or is this only syscall?  I think intrs
> are also exceptions...
> 
> Essentially, every time we switch to EL1/EL2 PAN will be enabled?

Yes!

> Sounds good to me, ok patrick@

Thanks.  I'll wait a bit to give drahn@ and jsg@ the opportunity to
chime in.

> > Index: arch/arm64/arm64/cpu.c
> > ===
> > RCS file: /cvs/src/sys/arch/arm64/arm64/cpu.c,v
> > retrieving revision 1.38
> > diff -u -p -r1.38 cpu.c
> > --- arch/arm64/arm64/cpu.c  4 Jun 2020 21:18:16 -   1.38
> > +++ arch/arm64/arm64/cpu.c  13 Aug 2020 19:12:30 -
> > @@ -321,6 +321,7 @@ cpu_attach(struct device *parent, struct
> > struct fdt_attach_args *faa = aux;
> > struct cpu_info *ci;
> > uint64_t mpidr = READ_SPECIALREG(mpidr_el1);
> > +   uint64_t id_aa64mmfr1, sctlr;
> > uint32_t opp;
> >  
> > KASSERT(faa->fa_nreg > 0);
> > @@ -393,6 +394,14 @@ cpu_attach(struct device *parent, struct
> > cpu_cpuspeed = cpu_clockspeed;
> > }
> >  
> > +   /* Enable PAN. */
> > +   id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> > +   if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> > +   sctlr = READ_SPECIALREG(sctlr_el1);
> > +   sctlr &= ~SCTLR_SPAN;
> > +   WRITE_SPECIALREG(sctlr_el1, sctlr);
> > +   }
> > +
> > /* Initialize debug registers. */
> > WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
> > WRITE_SPECIALREG(oslar_el1, 0);
> > @@ -522,6 +531,7 @@ cpu_boot_secondary(struct cpu_info *ci)
> >  void
> >  cpu_start_secondary(struct cpu_info *ci)
> >  {
> > +   uint64_t id_aa64mmfr1, sctlr;
> > uint64_t tcr;
> > int s;
> >  
> > @@ -543,6 +553,14 @@ cpu_start_secondary(struct cpu_info *ci)
> > tcr |= TCR_T0SZ(64 - USER_SPACE_BITS);
> > tcr |= TCR_A1;
> > WRITE_SPECIALREG(tcr_el1, tcr);
> > +
> > +   /* Enable PAN. */
> > +   id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> > +   if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> > +   sctlr = READ_SPECIALREG(sctlr_el1);
> > +   sctlr &= ~SCTLR_SPAN;
> > +   WRITE_SPECIALREG(sctlr_el1, sctlr);
> > +   }
> >  
> > /* Initialize debug registers. */
> > WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
> > Index: arch/arm64/include/armreg.h
> > ===
> > RCS file: /cvs/src/sys/arch/arm64/include/armreg.h,v
> > retrieving revision 1.11
> > diff -u -p -r1.11 armreg.h
> > --- arch/arm64/include/armreg.h 5 Jun 2020 22:14:25 -   1.11
> > +++ arch/arm64/include/armreg.h 13 Aug 2020 19:12:30 -
> > @@ -451,6 +451,7 @@
> >  #define SCTLR_nTWI  0x0001
> >  #define SCTLR_nTWE  0x0004
> >  #define SCTLR_WXN   0x0008
> > +#define SCTLR_SPAN  0x0080
> >  #define SCTLR_EOE   0x0100
> >  #define SCTLR_EE    0x0200
> >  #define SCTLR_UCI   0x0400
> > @@ -478,6 +479,7 @@
> >  #define PSR_D   0x0200
> >  #define PSR_IL  0x0010
> >  #define PSR_SS  0x0020
> > +#define PSR_PAN 0x0040
> >  #define PSR_V   0x1000
> >  #define PSR_C   0x2000
> >  #define PSR_Z   0x4000
> > 
> 



Re: Enable arm64 PAN feature

2020-08-13 Thread Patrick Wildt
On Thu, Aug 13, 2020 at 09:17:41PM +0200, Mark Kettenis wrote:
> ARMv8.1 introduced PAN (Privileged Access Never) which prevents the
> kernel from accessing userland data.  This can be bypassed by using
> special instructions which we already use in copyin(9) and friends.
> So we can simply turn this feature on if the CPU supports it.
> 
> Tested on an Odroid-C4 which has Cortex-A55 cores that have PAN
> support.
> 
> ok?
> 

So if I read this right, the SPAN bit controls whether an exception to
kernel/hypervisor mode sets the PAN bit in PSTATE.  Exception
also means "interrupt", or is this only syscall?  I think intrs
are also exceptions...

Essentially, every time we switch to EL1/EL2 PAN will be enabled?
Sounds good to me, ok patrick@

> 
> Index: arch/arm64/arm64/cpu.c
> ===
> RCS file: /cvs/src/sys/arch/arm64/arm64/cpu.c,v
> retrieving revision 1.38
> diff -u -p -r1.38 cpu.c
> --- arch/arm64/arm64/cpu.c4 Jun 2020 21:18:16 -   1.38
> +++ arch/arm64/arm64/cpu.c13 Aug 2020 19:12:30 -
> @@ -321,6 +321,7 @@ cpu_attach(struct device *parent, struct
>   struct fdt_attach_args *faa = aux;
>   struct cpu_info *ci;
>   uint64_t mpidr = READ_SPECIALREG(mpidr_el1);
> + uint64_t id_aa64mmfr1, sctlr;
>   uint32_t opp;
>  
>   KASSERT(faa->fa_nreg > 0);
> @@ -393,6 +394,14 @@ cpu_attach(struct device *parent, struct
>   cpu_cpuspeed = cpu_clockspeed;
>   }
>  
> + /* Enable PAN. */
> + id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> + if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> + sctlr = READ_SPECIALREG(sctlr_el1);
> + sctlr &= ~SCTLR_SPAN;
> + WRITE_SPECIALREG(sctlr_el1, sctlr);
> + }
> +
>   /* Initialize debug registers. */
>   WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
>   WRITE_SPECIALREG(oslar_el1, 0);
> @@ -522,6 +531,7 @@ cpu_boot_secondary(struct cpu_info *ci)
>  void
>  cpu_start_secondary(struct cpu_info *ci)
>  {
> + uint64_t id_aa64mmfr1, sctlr;
>   uint64_t tcr;
>   int s;
>  
> @@ -543,6 +553,14 @@ cpu_start_secondary(struct cpu_info *ci)
>   tcr |= TCR_T0SZ(64 - USER_SPACE_BITS);
>   tcr |= TCR_A1;
>   WRITE_SPECIALREG(tcr_el1, tcr);
> +
> + /* Enable PAN. */
> + id_aa64mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
> + if (ID_AA64MMFR1_PAN(id_aa64mmfr1) != ID_AA64MMFR1_PAN_NONE) {
> + sctlr = READ_SPECIALREG(sctlr_el1);
> + sctlr &= ~SCTLR_SPAN;
> + WRITE_SPECIALREG(sctlr_el1, sctlr);
> + }
>  
>   /* Initialize debug registers. */
>   WRITE_SPECIALREG(mdscr_el1, DBG_MDSCR_TDCC);
> Index: arch/arm64/include/armreg.h
> ===
> RCS file: /cvs/src/sys/arch/arm64/include/armreg.h,v
> retrieving revision 1.11
> diff -u -p -r1.11 armreg.h
> --- arch/arm64/include/armreg.h   5 Jun 2020 22:14:25 -   1.11
> +++ arch/arm64/include/armreg.h   13 Aug 2020 19:12:30 -
> @@ -451,6 +451,7 @@
>  #define  SCTLR_nTWI  0x0001
>  #define  SCTLR_nTWE  0x0004
>  #define  SCTLR_WXN   0x0008
> +#define  SCTLR_SPAN  0x0080
>  #define  SCTLR_EOE   0x0100
>  #define  SCTLR_EE    0x0200
>  #define  SCTLR_UCI   0x0400
> @@ -478,6 +479,7 @@
>  #define  PSR_D   0x0200
>  #define  PSR_IL  0x0010
>  #define  PSR_SS  0x0020
> +#define  PSR_PAN 0x0040
>  #define  PSR_V   0x1000
>  #define  PSR_C   0x2000
>  #define  PSR_Z   0x4000
>