Re: [Xen-devel] [PATCHv2 for-4.10] xen/arm: guest_walk: Fix check against the IPS

2017-10-11 Thread Sergej Proskurin
Hi Julien,

On 10/11/2017 04:57 PM, Julien Grall wrote:
> 
> 
> On 11/10/17 15:51, Sergej Proskurin wrote:
>> Hi Julien,
> 
> Hi,
> 
>> On 10/11/2017 04:29 PM, Julien Grall wrote:
>>> The function get_ipa_output_size checks whether the input size
>>> configured by the guest is valid and returns it.
>>>
>>> The check is done with the IPS value already shifted down, against
>>> TCR_EL1_IPS_48_BIT. However, the constant is defined with the shift
>>> included, so the check is always false.
>>>
>>> Fix it by doing the check on the non-shifted value.
>>>
>>> This was introduced by commit 7d623b358a ("arm/mem_access: Add
>>> long-descriptor based gpt"), which introduced the software page-table
>>> walk for stage-1.
>>>
>>> Note that the IPS code is now surrounded with #ifdef CONFIG_ARM_64
>>> because the Arm32 compiler will complain about a shift wider than the
>>> variable. This is fine as the code is executed for 64-bit domains
>>> only.
>>
>> This is a bit controversial compared to your review comments on the
>> initial implementation: you did not want to see any #ifdef
>> CONFIG_ARM_64 within the code. TCR_EL1 is a 64-bit register; to prevent
>> compilation issues on AArch32 systems, why don't you use uint64_t for
>> ips instead of register_t?
> 
> I am fully aware of what I said in the previous reviews, and I still took
> this decision because otherwise you would mix uint64_t and register_t.
> #ifdef CONFIG_ARM_64 is much nicer than mixing types.
> 
> Another way to fix it would be to completely rework the way you
> introduced TCR_EL1_IPS_*_BIT, so that you stick with non-shifted values
> rather than shifted ones.
> 
> But I don't have time for that and I don't want to see a latent security
> bug in the release.
> 
> Cheers,
> 
>> Thanks,
>> ~Sergej
>>
>>>
>>> Coverity-ID: 1457707
>>> Signed-off-by: Julien Grall <julien.gr...@linaro.org>
>>>

Reviewed-by: Sergej Proskurin <prosku...@sec.in.tum.de>

>>> ---
>>>
>>> Cc: Sergej Proskurin <prosku...@sec.in.tum.de>
>>>
>>>  Changes in v2:
>>>  - Fix compilation on Arm32
>>> ---
>>>   xen/arch/arm/guest_walk.c | 8 +++++---
>>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
>>> index c38bedcf65..4d1ea0cdc1 100644
>>> --- a/xen/arch/arm/guest_walk.c
>>> +++ b/xen/arch/arm/guest_walk.c
>>> @@ -185,7 +185,8 @@ static int guest_walk_sd(const struct vcpu *v,
>>>   static int get_ipa_output_size(struct domain *d, register_t tcr,
>>>  unsigned int *output_size)
>>>   {
>>> -    unsigned int ips;
>>> +#ifdef CONFIG_ARM_64
>>> +    register_t ips;
>>>     static const unsigned int ipa_sizes[7] = {
>>>   TCR_EL1_IPS_32_BIT_VAL,
>>> @@ -200,7 +201,7 @@ static int get_ipa_output_size(struct domain *d,
>>> register_t tcr,
>>>   if ( is_64bit_domain(d) )
>>>   {
>>>   /* Get the intermediate physical address size. */
>>> -    ips = (tcr & TCR_EL1_IPS_MASK) >> TCR_EL1_IPS_SHIFT;
>>> +    ips = tcr & TCR_EL1_IPS_MASK;
>>>     /*
>>>    * Return an error on reserved IPA output-sizes and if the IPA
>>> @@ -211,9 +212,10 @@ static int get_ipa_output_size(struct domain *d,
>>> register_t tcr,
>>>   if ( ips > TCR_EL1_IPS_48_BIT )
>>>   return -EFAULT;
>>>   -    *output_size = ipa_sizes[ips];
>>> +    *output_size = ipa_sizes[ips >> TCR_EL1_IPS_SHIFT];
>>>   }
>>>   else
>>> +#endif
>>>   *output_size = TCR_EL1_IPS_40_BIT_VAL;
>>>     return 0;
>>
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
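The bug fixed in the patch above can be demonstrated outside Xen. The following standalone sketch uses illustrative stand-in values for the TCR_EL1.IPS constants (the real definitions live in Xen's headers); it contrasts the pre-fix check, which compares a down-shifted field against a constant that still contains the shift, with the fixed logic from the patch:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative stand-ins: the IPS field sits in TCR_EL1[34:32], and
 * TCR_EL1_IPS_48_BIT is defined with the shift included -- the detail
 * at the heart of the bug. */
#define TCR_EL1_IPS_SHIFT   32
#define TCR_EL1_IPS_MASK    (UINT64_C(7) << TCR_EL1_IPS_SHIFT)
#define TCR_EL1_IPS_48_BIT  (UINT64_C(5) << TCR_EL1_IPS_SHIFT)

static const unsigned int ipa_sizes[6] = { 32, 36, 40, 42, 44, 48 };

/* Buggy variant: ips is shifted down but the constant is not, so the
 * reserved-encoding check is always false and ipa_sizes may later be
 * indexed out of bounds. */
static int ips_is_reserved_buggy(uint64_t tcr)
{
    uint64_t ips = (tcr & TCR_EL1_IPS_MASK) >> TCR_EL1_IPS_SHIFT;
    return ips > TCR_EL1_IPS_48_BIT;
}

/* Fixed variant, mirroring the patch: compare shifted against shifted,
 * and shift down only when indexing the lookup table. */
static int get_ipa_output_size(uint64_t tcr, unsigned int *output_size)
{
    uint64_t ips = tcr & TCR_EL1_IPS_MASK;

    if ( ips > TCR_EL1_IPS_48_BIT )
        return -EFAULT;

    *output_size = ipa_sizes[ips >> TCR_EL1_IPS_SHIFT];
    return 0;
}
```

With the reserved IPS encoding 0b110, the buggy predicate never fires, while the fixed function rejects it with -EFAULT.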


Re: [Xen-devel] [PATCHv2 for-4.10] xen/arm: guest_walk: Fix check against the IPS

2017-10-11 Thread Sergej Proskurin
Hi Julien,


On 10/11/2017 04:29 PM, Julien Grall wrote:
> The function get_ipa_output_size checks whether the input size
> configured by the guest is valid and returns it.
>
> The check is done with the IPS value already shifted down, against
> TCR_EL1_IPS_48_BIT. However, the constant is defined with the shift
> included, so the check is always false.
>
> Fix it by doing the check on the non-shifted value.
>
> This was introduced by commit 7d623b358a ("arm/mem_access: Add
> long-descriptor based gpt"), which introduced the software page-table walk
> for stage-1.
>
> Note that the IPS code is now surrounded with #ifdef CONFIG_ARM_64
> because the Arm32 compiler will complain about a shift wider than the
> variable. This is fine as the code is executed for 64-bit domains only.

This is a bit controversial compared to your review comments on the
initial implementation: you did not want to see any #ifdef
CONFIG_ARM_64 within the code. TCR_EL1 is a 64-bit register; to prevent
compilation issues on AArch32 systems, why don't you use uint64_t for
ips instead of register_t?

Thanks,
~Sergej

>
> Coverity-ID: 1457707
> Signed-off-by: Julien Grall <julien.gr...@linaro.org>
>
> ---
>
> Cc: Sergej Proskurin <prosku...@sec.in.tum.de>
>
> Changes in v2:
> - Fix compilation on Arm32
> ---
>  xen/arch/arm/guest_walk.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
> index c38bedcf65..4d1ea0cdc1 100644
> --- a/xen/arch/arm/guest_walk.c
> +++ b/xen/arch/arm/guest_walk.c
> @@ -185,7 +185,8 @@ static int guest_walk_sd(const struct vcpu *v,
>  static int get_ipa_output_size(struct domain *d, register_t tcr,
> unsigned int *output_size)
>  {
> -unsigned int ips;
> +#ifdef CONFIG_ARM_64
> +register_t ips;
>  
>  static const unsigned int ipa_sizes[7] = {
>  TCR_EL1_IPS_32_BIT_VAL,
> @@ -200,7 +201,7 @@ static int get_ipa_output_size(struct domain *d, 
> register_t tcr,
>  if ( is_64bit_domain(d) )
>  {
>  /* Get the intermediate physical address size. */
> -ips = (tcr & TCR_EL1_IPS_MASK) >> TCR_EL1_IPS_SHIFT;
> +ips = tcr & TCR_EL1_IPS_MASK;
>  
>  /*
>   * Return an error on reserved IPA output-sizes and if the IPA
> @@ -211,9 +212,10 @@ static int get_ipa_output_size(struct domain *d, 
> register_t tcr,
>  if ( ips > TCR_EL1_IPS_48_BIT )
>  return -EFAULT;
>  
> -*output_size = ipa_sizes[ips];
> +*output_size = ipa_sizes[ips >> TCR_EL1_IPS_SHIFT];
>  }
>  else
> +#endif
>  *output_size = TCR_EL1_IPS_40_BIT_VAL;
>  
>  return 0;

-- 
Sergej Proskurin, M.Sc.
Wissenschaftlicher Mitarbeiter

Technische Universität München
Fakultät für Informatik
Lehrstuhl für Sicherheit in der Informatik

Boltzmannstraße 3
85748 Garching (bei München)

Tel. +49 (0)89 289-18592
Fax +49 (0)89 289-18579





Re: [Xen-devel] [PATCH FOR-4.10] xen/arm: guest_walk: Fix check against the IPS

2017-10-10 Thread Sergej Proskurin
Hi Julien,


On 10/10/2017 05:20 PM, Julien Grall wrote:
> The function get_ipa_output_size checks whether the input size
> configured by the guest is valid and returns it.
>
> The check is done with the IPS value already shifted down, against
> TCR_EL1_IPS_48_BIT. However, the constant is defined with the shift
> included, so the check is always false.

Good fix, thank you!

>
> Fix it by doing the check on the non-shifted value.
>
> This was introduced by commit 7d623b358a ("arm/mem_access: Add
> long-descriptor based gpt"), which introduced the software page-table walk
> for stage-1.
>
> Coverity-ID: 1457707
> Signed-off-by: Julien Grall <julien.gr...@linaro.org>
>
> ---
>
> Cc: Sergej Proskurin <prosku...@sec.in.tum.de>

Acked-by: Sergej Proskurin <prosku...@sec.in.tum.de>

Thanks,
~Sergej

> ---
>  xen/arch/arm/guest_walk.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
> index c38bedcf65..a6de325572 100644
> --- a/xen/arch/arm/guest_walk.c
> +++ b/xen/arch/arm/guest_walk.c
> @@ -185,7 +185,7 @@ static int guest_walk_sd(const struct vcpu *v,
>  static int get_ipa_output_size(struct domain *d, register_t tcr,
> unsigned int *output_size)
>  {
> -unsigned int ips;
> +register_t ips;
>  
>  static const unsigned int ipa_sizes[7] = {
>  TCR_EL1_IPS_32_BIT_VAL,
> @@ -200,7 +200,7 @@ static int get_ipa_output_size(struct domain *d, 
> register_t tcr,
>  if ( is_64bit_domain(d) )
>  {
>  /* Get the intermediate physical address size. */
> -ips = (tcr & TCR_EL1_IPS_MASK) >> TCR_EL1_IPS_SHIFT;
> +ips = tcr & TCR_EL1_IPS_MASK;
>  
>  /*
>   * Return an error on reserved IPA output-sizes and if the IPA
> @@ -211,7 +211,7 @@ static int get_ipa_output_size(struct domain *d, 
> register_t tcr,
>  if ( ips > TCR_EL1_IPS_48_BIT )
>  return -EFAULT;
>  
> -*output_size = ipa_sizes[ips];
> +*output_size = ipa_sizes[ips >> TCR_EL1_IPS_SHIFT];
>  }
>  else
>  *output_size = TCR_EL1_IPS_40_BIT_VAL;





Re: [Xen-devel] [PATCH v4 00/39] arm/altp2m: Introducing altp2m to ARM

2017-10-07 Thread Sergej Proskurin
Hi Julien,

On 10/07/2017 12:29 PM, Julien Grall wrote:
> 
> 
> On 07/10/2017 11:18, Sergej Proskurin wrote:
>> Hi all,
> 
> Hello Sergej,
> 
>>
>> just wanted to send a friendly reminder about the altp2m-on-ARM patch
>> series, since it was submitted over a month ago and got somewhat lost
>> on xen-devel.
>>
>> I understand that it is too late to get this patch series into 4.10.
>> Yet, I would like to queue the series for 4.11. Please let me know if I
>> should wait for reviews until the end of the extended code freeze
>> deadline.
> 
> This is in my queue, I will have a look once I am done with 4.10 patches.
> 

Alright, thank you.

Cheers,
~Sergej



Re: [Xen-devel] [PATCH v4 00/39] arm/altp2m: Introducing altp2m to ARM

2017-10-07 Thread Sergej Proskurin
Hi all,

just wanted to send a friendly reminder about the altp2m-on-ARM patch
series, since it was submitted over a month ago and got somewhat lost on
xen-devel.

I understand that it is too late to get this patch series into 4.10.
Yet, I would like to queue the series for 4.11. Please let me know if I
should wait for reviews until the end of the extended code freeze deadline.

Thanks,
~Sergej

On 08/30/2017 08:32 PM, Sergej Proskurin wrote:
> Hi all,
> 
> The following patch series can be found on Github[0] and is part of my
> contribution to last year's Google Summer of Code (GSoC)[1]. My project is
> managed by the organization The Honeynet Project. As part of GSoC, I was being
> supervised by the Xen maintainer Tamas K. Lengyel <ta...@tklengyel.com>, 
> George
> D. Webster, and Steven Maresca.
> 
> In this patch series, we provide an implementation of the altp2m subsystem for
> ARM. Our implementation is based on the altp2m subsystem for x86, providing
> additional --alternate-- views on the guest's physical memory by means of the
> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
> extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM and
> modify xen-access to test the suggested functionality.
> 
> To be more precise, altp2m allows creating and switching to additional p2m
> views (i.e. gfn to mfn mappings). These views can be manipulated and
> activated at will through the provided HVMOPs. In this way, the active guest instance in
> question can seamlessly proceed execution without noticing that anything has
> changed. The prime scope of application of altp2m is Virtual Machine
> Introspection, where guest systems are analyzed from the outside of the VM.
> 
> Altp2m can be activated by means of the guest control parameter "altp2m" on
> x86 and ARM architectures. For use cases requiring purely external access to
> altp2m, this patch series allows specifying whether the altp2m interface
> should be external-only.
> 
> This is a revised version of v3, which was submitted in 2016. It
> incorporates the comments on the previous patch series. Although the
> previous version was submitted last year, I have kept the comments of the
> individual patches. Both the purpose and the changes from v3 to v4 are
> stated inside the individual commits.
> 
> Best regards,
> ~Sergej
> 
> [0] https://github.com/sergej-proskurin/xen (branch arm-altp2m-v4)
> [1] https://summerofcode.withgoogle.com/projects/#4970052843470848
> 
> Sergej Proskurin (38):
>   arm/p2m: Introduce p2m_(switch|restore)_vttbr_and_(g|s)et_flags
>   arm/p2m: Add first altp2m HVMOP stubs
>   arm/p2m: Add hvm_allow_(set|get)_param
>   arm/p2m: Add HVMOP_altp2m_get_domain_state
>   arm/p2m: Introduce p2m_is_(hostp2m|altp2m)
>   arm/p2m: Cosmetic fix - substitute _gfn(ULONG_MAX) for INVALID_GFN
>   arm/p2m: Move hostp2m init/teardown to individual functions
>   arm/p2m: Cosmetic fix - function prototype of p2m_alloc_table
>   arm/p2m: Rename parameter in p2m_alloc_vmid
>   arm/p2m: Change func prototype and impl of p2m_(alloc|free)_vmid
>   altp2m: Move (MAX|INVALID)_ALTP2M to xen/p2m-common.h
>   arm/p2m: Add altp2m init/teardown routines
>   arm/p2m: Add altp2m table flushing routine
>   arm/p2m: Add HVMOP_altp2m_set_domain_state
>   arm/p2m: Add HVMOP_altp2m_create_p2m
>   arm/p2m: Add HVMOP_altp2m_destroy_p2m
>   arm/p2m: Add HVMOP_altp2m_switch_p2m
>   arm/p2m: Add p2m_get_active_p2m macro
>   arm/p2m: Make p2m_restore_state ready for altp2m
>   arm/p2m: Make get_page_from_gva ready for altp2m
>   arm/p2m: Cosmetic fix - __p2m_get_mem_access
>   arm/p2m: Make p2m_mem_access_check ready for altp2m
>   arm/p2m: Cosmetic fix - function prototypes
>   arm/p2m: Make p2m_put_l3_page ready for altp2m
>   arm/p2m: Modify reference count only if hostp2m active
>   arm/p2m: Add HVMOP_altp2m_set_mem_access
>   arm/p2m: Add altp2m_propagate_change
>   altp2m: Rename p2m_altp2m_check to altp2m_check
>   x86/altp2m: Move altp2m_check to altp2m.c
>   arm/altp2m: Move altp2m_check to altp2m.h
>   arm/altp2m: Introduce altp2m_switch_vcpu_altp2m_by_id
>   arm/altp2m: Make altp2m_vcpu_idx ready for altp2m
>   arm/p2m: Add altp2m paging mechanism
>   arm/p2m: Add HVMOP_altp2m_change_gfn
>   arm/p2m: Adjust debug information to altp2m
>   altp2m: Allow activating altp2m on ARM domains
>   arm/xen-access: Extend xen-access for altp2m on ARM
>   arm/xen-access: Add test of xc_altp2m_change_gfn
> 
> Tamas K Lengyel (1):
>   altp2m: Document external-only use on ARM
> 
>  docs/man/xl.cfg.pod.5.in|   8 +-
>  tools/libxl/libxl.h |  10 +-
>  tools/libxl/libxl_dom.c |  16 +-
>  tools/libxl/libxl_type
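As a rough illustration of the mechanism the cover letter describes — a toy model with made-up types, not Xen code — each altp2m view can be thought of as an independent gfn-to-mfn map, and switching views changes the guest's effective translation without the guest's involvement:

```c
#include <assert.h>
#include <stdint.h>

#define NR_GFNS     4
#define MAX_ALTP2M  2

/* Toy model: one gfn->mfn table per view. */
struct p2m_view {
    uint64_t mfn[NR_GFNS];
};

static struct p2m_view views[MAX_ALTP2M];
static unsigned int active_view;

/* Analogous to HVMOP_altp2m_switch_p2m: pick the active view. */
static void altp2m_switch(unsigned int idx)
{
    assert(idx < MAX_ALTP2M);
    active_view = idx;
}

/* Analogous to HVMOP_altp2m_change_gfn: remap one gfn in one view. */
static void altp2m_change_gfn(unsigned int idx, uint64_t gfn, uint64_t mfn)
{
    views[idx].mfn[gfn] = mfn;
}

/* The guest-visible translation always goes through the active view. */
static uint64_t translate(uint64_t gfn)
{
    return views[active_view].mfn[gfn];
}
```

A monitor can thus present a modified mapping to the guest only while the alternate view is active, which is what makes the approach attractive for Virtual Machine Introspection.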

[Xen-devel] [RFC PATCH 0/4] Introduce Single-Stepping to ARMv8

2017-09-05 Thread Sergej Proskurin
Hi all,

This patch series introduces support for single-stepping of guest VMs on
ARMv8. For detailed information about the single-stepping mechanism on
ARMv8, we refer the reader to ARM DDI 0487B.a Section D2.12 (Software
Step exceptions).

Our current implementation supports rudimentary single-stepping of the
guest kernel executing in EL1 and is by no means complete. While the
hardware architecture also allows single-stepping EL2, we do not yet
implement this feature. Another limitation is that the current
implementation does not yet support single-stepping over
load-exclusive/store-exclusive instructions (LDAXR/STXR), as noticed by
James Morse [0].

This patch series has been submitted as an RFC in order to discuss
potential implementation flaws. In the following, we describe the test
environment and the observed effects, for which we would like to find a
solution.

Our general idea is to make use of the single-stepping functionality as
a means for tracing the guest kernel, executing in EL1. Therefore, we
would like to inject SMC instructions at desired locations within the
guest kernel's text segment. That is, upon execution of injected SMC
instructions, the guest would trap into the hypervisor, where we can
trace the trapping event. While trapped in the hypervisor, we would like
to replace the previously injected SMC with the original instruction (so
as to ensure correct guest execution), single-step this original
instruction, and finally place back the SMC instruction before we
continue guest execution.

Our test case is a simple kernel module, which we inject into the
guest. Upon trapping the SMC instruction in Xen, we activate
single-stepping and increase the guest's PC by four to continue
execution.  Now, the issue that we are experiencing is that upon
execution of the SMC instruction, the guest seems to trap into a
synchronous interrupt handler. That is, the next guest instruction that
generates a software step exception is the first instruction of the
interrupt handler, not the next instruction. This is deterministic and
independent of whether we increment the PC by four (to the instruction
following the trapping SMC instruction) or not. As a result, because the
guest handles the interrupt, we cannot single-step the replaced original
instruction until the interrupt handler finishes.

Our tests have shown that before the guest (that is currently configured
to use only one VCPU) generates a software step exception that traps
into the hypervisor at do_trap_guest_sync, the hypervisor interrupts the
guest and executes the handler do_trap_irq. We believe that the
interrupt gets injected by Xen into the guest (e.g., timer interrupt).
Which is the reason, why the next instruction that generates a software
step exception resides in the interrupt handler routine. This happens
deterministically every time the SMC gets executed.

We would like to understand if and how we can suspend guest interrupt
injections (if this is truly the cause of our problems), as long as we
are single-stepping the guest, without causing issues. This approach
would prevent SMC instructions from being followed by an in-guest
interrupt handling procedure and thus facilitate our use case.

It would be of great help if we could discuss the above issue and
hopefully find a solution to it.

Thank you very much in advance.

Cheers,
~Sergej

[0] https://lists.xen.org/archives/html/xen-devel/2017-08/msg00661.html
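The inject/trap/step/re-arm cycle described above can be sketched as a small state machine. This is a toy model: the helpers and the flat "text segment" are invented for illustration, and only the A64 instruction encodings are real:

```c
#include <assert.h>
#include <stdint.h>

#define SMC_INSN  0xd4000003u   /* A64 "SMC #0" encoding */
#define NOP_INSN  0xd503201fu   /* A64 NOP */

/* Toy guest text segment and breakpoint state. */
static uint32_t text[4] = { NOP_INSN, NOP_INSN, NOP_INSN, NOP_INSN };
static uint32_t saved_insn;
static unsigned int bp_index;
static int single_stepping;

/* Inject an SMC at the desired location, remembering the original. */
static void inject_smc(unsigned int idx)
{
    bp_index = idx;
    saved_insn = text[idx];
    text[idx] = SMC_INSN;
}

/* On the SMC trap: restore the original instruction and arm
 * single-stepping so it executes exactly once. */
static void on_smc_trap(void)
{
    text[bp_index] = saved_insn;
    single_stepping = 1;
}

/* On the software step exception: re-arm the SMC breakpoint. */
static void on_single_step(void)
{
    single_stepping = 0;
    text[bp_index] = SMC_INSN;
}
```

The issue reported in this thread is that, in step three, the first software step exception arrives from the guest's interrupt handler rather than from the restored instruction.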

Sergej Proskurin (4):
  arm/monitor: Introduce monitoring of single-step events
  arm/domctl: Add XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_{ON|OFF}
  arm/traps: Allow trapping on single-step events
  vm_event: Move vm_event_toggle_singlestep to <xen/vm_event.h>

 xen/arch/arm/arm64/entry.S   |  2 ++
 xen/arch/arm/domctl.c| 35 
 xen/arch/arm/monitor.c   | 23 ++
 xen/arch/arm/traps.c | 50 +++-
 xen/arch/arm/vm_event.c  | 11 +
 xen/include/asm-arm/domain.h |  3 +++
 xen/include/asm-arm/monitor.h|  5 +++-
 xen/include/asm-arm/perfc_defn.h |  1 +
 xen/include/asm-arm/processor.h  |  2 ++
 xen/include/asm-arm/vm_event.h   |  6 -
 xen/include/asm-x86/vm_event.h   |  3 ---
 xen/include/xen/vm_event.h   |  3 +++
 12 files changed, 133 insertions(+), 11 deletions(-)

--
2.13.3




[Xen-devel] [RFC PATCH 4/4] vm_event: Move vm_event_toggle_singlestep to <xen/vm_event.h>

2017-09-05 Thread Sergej Proskurin
In this commit we move the declaration of the function
vm_event_toggle_singlestep from <asm/vm_event.h> to <xen/vm_event.h> and
implement the associated functionality on ARM.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
---
 xen/arch/arm/vm_event.c| 11 +++
 xen/include/asm-arm/vm_event.h |  6 --
 xen/include/asm-x86/vm_event.h |  3 ---
 xen/include/xen/vm_event.h |  3 +++
 4 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/vm_event.c b/xen/arch/arm/vm_event.c
index eaac92078d..a3bb525e9e 100644
--- a/xen/arch/arm/vm_event.c
+++ b/xen/arch/arm/vm_event.c
@@ -47,6 +47,17 @@ void vm_event_monitor_next_interrupt(struct vcpu *v)
 /* Not supported on ARM. */
 }
 
+void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+vm_event_response_t *rsp)
+{
+if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
+return;
+
+ASSERT(atomic_read(&v->vm_event_pause_count));
+
+v->arch.single_step = !v->arch.single_step;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 66f2474fe1..0d7a5446f2 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -34,12 +34,6 @@ static inline void vm_event_cleanup_domain(struct domain *d)
  memset(&d->monitor, 0, sizeof(d->monitor));
 }
 
-static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
-  vm_event_response_t *rsp)
-{
-/* Not supported on ARM. */
-}
-
 static inline
 void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
 {
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 39e73c83ca..139867178a 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -40,9 +40,6 @@ int vm_event_init_domain(struct domain *d);
 
 void vm_event_cleanup_domain(struct domain *d);
 
-void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
-vm_event_response_t *rsp);
-
 void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp);
 
 void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
diff --git a/xen/include/xen/vm_event.h b/xen/include/xen/vm_event.h
index 2fb39519b1..210aab1b37 100644
--- a/xen/include/xen/vm_event.h
+++ b/xen/include/xen/vm_event.h
@@ -80,6 +80,9 @@ void vm_event_set_registers(struct vcpu *v, 
vm_event_response_t *rsp);
 
 void vm_event_monitor_next_interrupt(struct vcpu *v);
 
+void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+vm_event_response_t *rsp);
+
 #endif /* __VM_EVENT_H__ */
 
 /*
-- 
2.13.3




[Xen-devel] [RFC PATCH 3/4] arm/traps: Allow trapping on single-step events

2017-09-05 Thread Sergej Proskurin
This commit concludes the single-stepping functionality on ARM by adding
support for trapping on, and setting up, the architecture's software
step exceptions.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/arch/arm/arm64/entry.S   |  2 ++
 xen/arch/arm/traps.c | 50 +++-
 xen/include/asm-arm/perfc_defn.h |  1 +
 xen/include/asm-arm/processor.h  |  2 ++
 4 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 6d99e46f0f..5e89f24494 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
lr      .req    x30 /* link register */
 
 bl  leave_hypervisor_tail /* Disables interrupts on return */
 
+bl  setup_single_step
+
 exit_guest \compat
 
 .endif
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index aa838e8e77..9c45b0706e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -163,7 +163,7 @@ void init_traps(void)
 WRITE_SYSREG((vaddr_t)hyp_traps_vector, VBAR_EL2);
 
 /* Trap Debug and Performance Monitor accesses */
-WRITE_SYSREG(HDCR_TDRA|HDCR_TDOSA|HDCR_TDA|HDCR_TPM|HDCR_TPMCR,
+WRITE_SYSREG(HDCR_TDRA|HDCR_TDOSA|HDCR_TDA|HDCR_TPM|HDCR_TPMCR|HDCR_TDE,
  MDCR_EL2);
 
 /* Trap CP15 c15 used for implementation defined registers */
@@ -1332,6 +1332,20 @@ int do_bug_frame(struct cpu_user_regs *regs, vaddr_t pc)
 }
 
 #ifdef CONFIG_ARM_64
+static void do_trap_ss(struct cpu_user_regs *regs, const union hsr hsr)
+{
+int rc = 0;
+
+/* XXX: We do not support single-stepping of EL2, yet. */
+BUG_ON(hyp_mode(regs));
+
+if ( current->domain->arch.monitor.single_step_enabled )
+rc = monitor_ss();
+
+if ( rc != 1 )
+inject_undef_exception(regs, hsr);
+}
+
 static void do_trap_brk(struct cpu_user_regs *regs, const union hsr hsr)
 {
 /* HCR_EL2.TGE and MDCR_EL2.TDE are not set so we never receive
@@ -2943,6 +2957,12 @@ asmlinkage void do_trap_guest_sync(struct cpu_user_regs 
*regs)
 perfc_incr(trap_dabt);
 do_trap_data_abort_guest(regs, hsr);
 break;
+#ifdef CONFIG_ARM_64
+case HSR_EC_SS_LOWER_EL:
+perfc_incr(trap_ss);
+do_trap_ss(regs, hsr);
+break;
+#endif
 
 default:
 gprintk(XENLOG_WARNING,
@@ -2999,6 +3019,34 @@ asmlinkage void do_trap_fiq(struct cpu_user_regs *regs)
 gic_interrupt(regs, 1);
 }
 
+asmlinkage void setup_single_step(void)
+{
+uint32_t mdscr, mdcr;
+struct vcpu *v = current;
+struct cpu_user_regs *regs = guest_cpu_user_regs();
+
+#define MDSCR_EL1_SS    (_AC(1,U) << 0)
+#define SPSR_EL2_SS (_AC(1,U) << 21)
+
+mdscr = READ_SYSREG(MDSCR_EL1);
+mdcr = READ_SYSREG(MDCR_EL2);
+
+if ( unlikely(v->arch.single_step) )
+{
+mdcr |= HDCR_TDE;
+mdscr |= MDSCR_EL1_SS;
+regs->cpsr |= SPSR_EL2_SS;
+}
+else
+{
+mdcr &= ~HDCR_TDE;
+mdscr &= ~MDSCR_EL1_SS;
+}
+
+WRITE_SYSREG(mdscr, MDSCR_EL1);
+WRITE_SYSREG(mdcr, MDCR_EL2);
+}
+
 asmlinkage void leave_hypervisor_tail(void)
 {
 while (1)
diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
index 5f957ee6ec..46b82e4fee 100644
--- a/xen/include/asm-arm/perfc_defn.h
+++ b/xen/include/asm-arm/perfc_defn.h
@@ -18,6 +18,7 @@ PERFCOUNTER(trap_hvc32,"trap: 32-bit hvc")
 PERFCOUNTER(trap_smc64,"trap: 64-bit smc")
 PERFCOUNTER(trap_hvc64,"trap: 64-bit hvc")
 PERFCOUNTER(trap_sysreg,   "trap: sysreg access")
+PERFCOUNTER(trap_ss,   "trap: software step")
 #endif
 PERFCOUNTER(trap_iabt, "trap: guest instr abort")
 PERFCOUNTER(trap_dabt, "trap: guest data abort")
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 9f7a42f86b..3e0ec4f537 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -323,6 +323,8 @@
 #define HSR_EC_DATA_ABORT_LOWER_EL  0x24
 #define HSR_EC_DATA_ABORT_CURR_EL   0x25
 #ifdef CONFIG_ARM_64
+#define HSR_EC_SS_LOWER_EL  0x32
+#define HSR_EC_SS_CURR_EL   0x33
 #define HSR_EC_BRK  0x3c
 #endif
 
-- 
2.13.3
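Conceptually, setup_single_step() in the patch above arms a two-flag state machine: MDSCR_EL1.SS enables stepping, and the SS bit restored into PSTATE from SPSR_EL2 lets exactly one instruction retire before the step exception is taken. A toy model of that behavior, ignoring exception-level and routing details:

```c
#include <assert.h>

/* Toy model of the AArch64 software-step flags that the patch arms. */
struct step_state {
    int mdscr_ss;   /* MDSCR_EL1.SS: stepping enabled */
    int pstate_ss;  /* PSTATE.SS: "one more instruction", loaded from SPSR */
};

static void arm_single_step(struct step_state *s)
{
    s->mdscr_ss = 1;
    s->pstate_ss = 1;
}

/* Retire one guest instruction; returns 1 if a software step
 * exception is taken instead of executing it. */
static int retire_insn(struct step_state *s)
{
    if ( !s->mdscr_ss )
        return 0;           /* stepping disabled: run normally */

    if ( s->pstate_ss )
    {
        s->pstate_ss = 0;   /* the stepped instruction executes */
        return 0;
    }

    return 1;               /* next instruction traps */
}
```

This is why the hypervisor sets SPSR_EL2.SS on the return path: the guest executes exactly one instruction and the following one generates the step exception.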




[Xen-devel] [RFC PATCH 2/4] arm/domctl: Add XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_{ON|OFF}

2017-09-05 Thread Sergej Proskurin
This commit adds the domctl that is required to enable single-stepping
on ARM.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/arch/arm/domctl.c| 35 +++
 xen/include/asm-arm/domain.h |  2 ++
 2 files changed, 37 insertions(+)

diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 971caecd58..f640519b5c 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -20,6 +20,28 @@ void arch_get_domain_info(const struct domain *d,
 info->flags |= XEN_DOMINF_hap;
 }
 
+int debug_do_domctl(struct vcpu *v, int32_t op)
+{
+int rc;
+
+switch ( op )
+{
+case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON:
+case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF:
+/* XXX: check whether the cpu supports singlestepping. */
+
+rc = 0;
+vcpu_pause(v);
+v->arch.single_step = (op == XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON);
+vcpu_unpause(v); /* guest will latch new state */
+break;
+default:
+rc = -ENOSYS;
+}
+
+return rc;
+}
+
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
@@ -114,6 +136,19 @@ long arch_do_domctl(struct xen_domctl *domctl, struct 
domain *d,
 
 return 0;
 }
+case XEN_DOMCTL_debug_op:
+{
+struct vcpu *v;
+
+if ( (domctl->u.debug_op.vcpu >= d->max_vcpus) ||
+ ((v = d->vcpu[domctl->u.debug_op.vcpu]) == NULL) )
+return -EINVAL;
+
+if ( (v == current) )
+return -EINVAL;
+
+return debug_do_domctl(v, domctl->u.debug_op.op);
+}
 
 case XEN_DOMCTL_disable_migrate:
 d->disable_migrate = domctl->u.disable_migrate.disable;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 0e4ee2956e..105bad0b5b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -282,6 +282,8 @@ struct arch_vcpu
 struct vtimer phys_timer;
 struct vtimer virt_timer;
 bool_t vtimer_initialized;
+
+bool single_step;
 }  __cacheline_aligned;
 
 void vcpu_show_execution_state(struct vcpu *);
-- 
2.13.3
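The ON/OFF latching of the new domctl can be modelled host-independently. In this sketch the op names mirror the public interface, but the vcpu structure, the pause/unpause around the update, and the CPU capability check are stubbed out:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF 0
#define XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON  1

struct vcpu {
    int single_step;   /* stands in for v->arch.single_step */
};

/* Same latching logic as the patch; the real code pauses the target
 * vcpu around the update so it latches the new state on resume. */
static int debug_do_domctl(struct vcpu *v, int32_t op)
{
    switch ( op )
    {
    case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON:
    case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF:
        v->single_step = (op == XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON);
        return 0;
    default:
        return -ENOSYS;
    }
}
```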




[Xen-devel] [RFC PATCH 1/4] arm/monitor: Introduce monitoring of single-step events

2017-09-05 Thread Sergej Proskurin
In this commit, we extend the capabilities of the monitor to allow
tracing of single-step events on ARM.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/arch/arm/monitor.c| 23 +++
 xen/include/asm-arm/domain.h  |  1 +
 xen/include/asm-arm/monitor.h |  5 -
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/monitor.c b/xen/arch/arm/monitor.c
index 59ce8f635f..a4466c9574 100644
--- a/xen/arch/arm/monitor.c
+++ b/xen/arch/arm/monitor.c
@@ -32,6 +32,20 @@ int arch_monitor_domctl_event(struct domain *d,
 
 switch ( mop->event )
 {
+case XEN_DOMCTL_MONITOR_EVENT_SINGLESTEP:
+{
+bool old_status = ad->monitor.single_step_enabled;
+
+if ( unlikely(old_status == requested_status) )
+return -EEXIST;
+
+domain_pause(d);
+ad->monitor.single_step_enabled = requested_status;
+domain_unpause(d);
+
+break;
+}
+
 case XEN_DOMCTL_MONITOR_EVENT_PRIVILEGED_CALL:
 {
 bool_t old_status = ad->monitor.privileged_call_enabled;
@@ -66,6 +80,15 @@ int monitor_smc(void)
return monitor_traps(current, 1, &req);
 }
 
+int monitor_ss(void)
+{
+vm_event_request_t req = {
+.reason = VM_EVENT_REASON_SINGLESTEP,
+};
+
+return monitor_traps(current, 1, &req);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 8dfc1d1ec2..0e4ee2956e 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -143,6 +143,7 @@ struct arch_domain
 
 /* Monitor options */
 struct {
+uint8_t single_step_enabled : 1;
 uint8_t privileged_call_enabled : 1;
 } monitor;
 }  __cacheline_aligned;
diff --git a/xen/include/asm-arm/monitor.h b/xen/include/asm-arm/monitor.h
index 7567be66bd..66c7fe14fe 100644
--- a/xen/include/asm-arm/monitor.h
+++ b/xen/include/asm-arm/monitor.h
@@ -57,12 +57,15 @@ static inline uint32_t arch_monitor_get_capabilities(struct 
domain *d)
 {
 uint32_t capabilities = 0;
 
-capabilities = (1U << XEN_DOMCTL_MONITOR_EVENT_GUEST_REQUEST |
+capabilities = (1U << XEN_DOMCTL_MONITOR_EVENT_SINGLESTEP |
+1U << XEN_DOMCTL_MONITOR_EVENT_GUEST_REQUEST |
 1U << XEN_DOMCTL_MONITOR_EVENT_PRIVILEGED_CALL);
 
 return capabilities;
 }
 
+int monitor_ss(void);
+
 int monitor_smc(void);
 
 #endif /* __ASM_ARM_MONITOR_H__ */
-- 
2.13.3




Re: [Xen-devel] [PATCH v4 10/11] public: add XENFEAT_ARM_SMCCC_supported feature

2017-09-04 Thread Sergej Proskurin
Hi Julien,


On 09/04/2017 08:07 AM, Julien Grall wrote:
> Hello,
>
> Sorry for the formatting, writing from my phone.
>
> On Thu, 31 Aug 2017, 22:18 Sergej Proskurin <prosku...@sec.in.tum.de> wrote:
>

[...]

>
> On your first mail, you started with "smc injection doesn't work", then "I
> replace instruction" and now you mention about single-stepping.
>
> This doesn't help at all to understand what you are doing and really not
> related to this thread.
>
> So can you please detail exactly what you are doing rather than giving
> bits and pieces?
>

I will provide more information in a separate thread soon so that the
actual issue, hopefully, will become clearer. Thank you.

>> I use SMC instructions as the guest can register for BRK events. The
>> guest cannot register for SMC events. So, in order to stay stealthy
>> towards the guest and also not to cope with BRK re-injections, SMCs
>> seemed to be the right choice:
>
> I have already said that using SMC is a pretty bad idea when Tamas added
> the trapping and you guys still seem to think it is a good idea...

I did not know about this conversation with Tamas. Why do you believe
that using SMC instructions is not a good idea? Could you please refer
me to the particular thread? Thank you.

>>>>> Current code in the hypervisor will always inject an undefined
>>>>> instruction exception when you call SMC (unless you installed a VM
>>>>> monitor for the guest). Also, it will not increase the PC. So, if
>>>>> you try to remove the inject_undef_exception() call, you'll get into
>>>>> an infinite loop.
>>>>>
>>>> I have a registered SMC monitor running in dom0 that does not reinject
>>>> the undefined instruction exception in do_trap_smc(). So there is no
>>>> infinite loop at this point. What I see is that as soon as my code in
>>>> xen-access (dom0) increments the trapped guest PC by 4 (and also if it
>>>> doesn't) the next instruction inside the guest will be inside the undef
>>>> instruction handler (I can see that because I have implemented a single
>>>> stepping mechanism for AArch64 in Xen that gets activated right after
>>>> the guest executes the injected SMC instruction).
>>> That's strange. Can you print whole vCPU state to determine that PC
>>> points to the right place? Also you can check DFAR. Probably you can
>>> even dump memory pointed by DFAR to make sure that you written back
>>> correct instruction.
>> Yea, I do that. And both the SMC injection and the further vCPU
>> state seem to be correct at this point.
>>
>> Today, I saw an interesting behavior in my single-stepping
>> implementation, which is the reason for my late reply. I can't explain
>> what is going wrong yet, so I will need to further investigate this
>> behavior and post an RFC for the single-stepping mechanism so as to put
>> more eyes on the issue. Maybe this will help solve it.
>>
>> But anyway, thank you very much for your help! I really appreciate it :)
>>
> You probably want to look at
> https://lists.xen.org/archives/html/xen-devel/2017-08/msg00661.html and
> maybe sync-up with this person if you are not working with him.

Thanks for mentioning that. Florian is a student of mine who has also
looked at single-stepping on ARMv8. We have collaborated on this topic
together. I will take over from here, as his work goes in a slightly
different direction.

Thanks,
~Sergej




Re: [Xen-devel] [PATCH v4 10/11] public: add XENFEAT_ARM_SMCCC_supported feature

2017-08-31 Thread Sergej Proskurin
Hi Volodymyr,


On 08/31/2017 04:58 PM, Volodymyr Babchuk wrote:
> Hi Sergej
>
> On 31.08.17 16:51, Sergej Proskurin wrote:
>> Hi Volodymyr,
>>
>>
>> On 08/31/2017 02:44 PM, Volodymyr Babchuk wrote:
>>> Hello Sergej,
>>>
>>> On 31.08.17 15:20, Sergej Proskurin wrote:
>>>> Hi Volodymyr, hi Julien,
>>>>
>>>>
>>>> On 08/24/2017 07:25 PM, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 21/08/17 21:27, Volodymyr Babchuk wrote:
>>>>>> This feature indicates that hypervisor is compatible with ARM
>>>>>> SMC calling convention. Hypervisor will not inject an undefined
>>>>>> instruction exception if an invalid SMC function were called and
>>>>>> will not crash a domain if an invlalid HVC functions were called.
>>>>>
>>>>> s/invlalid/invalid/
>>>>>
>>>>> The last sentence is misleading. Xen will still inject an undefined
>>>>> instruction for some SMC/HVC. You may want to rework it to make it
>>>>> clear.
>>>>>
>>>>
>>>> Now that you say that Xen will still inject an undefined instruction
>>>> exception for some SMCs, I have a to ask for which exactly?
>>> For ones that are compatible with ARM SMCCC [1]. E.g if you are
>>> running SMCCC-compatible system and you are calling SMC/HVC with
>>> immediate value 0, then you are safe.
>>>
>>
>> Alright, as far as I understand this is exactly what I do right now. I
>> inject an SMC that is encoded as 0xD403.
> Actually, this patch series is not merged yet, so there is no SMCCC
> support right now. But this should not be a problem in your case.
>
>>>> I might be off topic here, so please tell me if you believe this is
>>>> not
>>>> the right place for this question. In this case I will open a new
>>>> thread. Right now, I am working with the previous implementation of
>>>> do_trap_smc that was extended in this patch. Yet, as far as I
>>>> understand, the behavior should not change, which is why I am asking
>>>> this question in this thread.
>>> If you are talking about forwarding SMC exception to VM monitor, then
>>> yes, that should not change.
>>
>> Yes, exactly. Sorry, I forgot to mention that I have a modified
>> xen-access version running in dom0 that registers an SMC monitor and
>> also increases the PC by 4 (or, depending on the case, simply leaves it
>> as it is) on every SMC trap.
> Aha, I see. I never was able to test this feature fully. I played with
> my own VM monitor, when I tried to offload SMC handling to another
> domain. But I had to comment out most of the VM monitor code in XEN.
>
>>>
>>>> Currently, I am working on SMC guest injections and trying to
>>>> understand
>>>> the resulting behavior. Every time, right after the execution of an
>>>> injected SMC instruction, the guest traps into the undefined
>>>> instruction
>>>> exception handler in EL1 and I simply don't understand why. As far
>>>> as I
>>>> understand, as soon as an injected SMC instruction gets executed, it
>>>> should
>>>> _transparently_ trap into the hypervisor (assuming MDCR_EL2.TDE is
>>>> set).
>>>> As soon as the hypervisor returns (e.g. to PC+4 or to the trapping PC
>>>> that now contains the original instruction instead of the injected
>>>> SMC),
>>>> the guest should simply continue its execution.
>>> Hm. What do you mean by "SMC instruction injection"?
>>
>> My code runs in dom0 and "injects" an SMC instruction at predefined
>> addresses inside the guest so as to simulate software breakpoints. By this,
>> I mean that the code replaces the original guest instruction at a
>> certain address with an SMC. Think of a debugger that uses software
>> breakpoints. The idea is to put back the original instruction right
>> after the SMC gets called, so that the guest can continue with its
>> execution. You can find more information about that in [0], yet please
>> consider that I try to trap the SMC directly in Xen instead of
>> TrustZone.
> Yep, I see. Immediate question: do you flush the icache after you put
> the original instruction back?

Yeap. But the current behavior does not let me get this far, as the
system jumps into the interrupt handler and single-steps the handler
instead of the instruction of interest.

> Then I can't see, why this should not work. If 

Re: [Xen-devel] [PATCH v4 11/39] altp2m: Move (MAX|INVALID)_ALTP2M to xen/p2m-common.h

2017-08-31 Thread Sergej Proskurin
Hi Jan,


On 08/31/2017 12:19 PM, Jan Beulich wrote:
 On 31.08.17 at 11:49,  wrote:
>> On 08/31/2017 10:04 AM, Jan Beulich wrote:
>> On 30.08.17 at 20:32,  wrote:
 We move the macros (MAX|INVALID)_ALTP2M out of x86-related code to
 common code, as the following patches will make use of them on ARM.
>>> But both seem not impossible to require arch-specific values.
>> Right. The general idea at this point is to move as much of altp2m
>> functionality/configuration as possible into a common place. Yet, if you
>> believe that, e.g., the number of altp2m views could/should diverge
>> between both architectures, I will gladly move the defines back into
>> arch-related parts. However, we need to consider that while x86/Intel
>> supports up to 512 entries for EPT pointers as part of the VMCS, we are
>> quite flexible on ARM: we manage the views entirely in software and
>> hence on ARM we can easily keep up with Intel's specification. This
>> allows us to hold parts of the altp2m configuration in a unified place.
>> Or do you believe this is not the right way to go?
> Well, you've basically answered this yourself: Why would you
> want to constrain ARM just because of VMX restrictions? Requiring
> all architectures to surface the same constants (regardless of
> actual values) is all you need to be able to commonize code.

Alright, I will remove the upper constants from common code in v5.

Thanks,
~Sergej



Re: [Xen-devel] [PATCH v4 10/11] public: add XENFEAT_ARM_SMCCC_supported feature

2017-08-31 Thread Sergej Proskurin
Hi Volodymyr,


On 08/31/2017 02:44 PM, Volodymyr Babchuk wrote:
> Hello Sergej,
>
> On 31.08.17 15:20, Sergej Proskurin wrote:
>> Hi Volodymyr, hi Julien,
>>
>>
>> On 08/24/2017 07:25 PM, Julien Grall wrote:
>>>
>>>
>>> On 21/08/17 21:27, Volodymyr Babchuk wrote:
>>>> This feature indicates that hypervisor is compatible with ARM
>>>> SMC calling convention. Hypervisor will not inject an undefined
>>>> instruction exception if an invalid SMC function were called and
>>>> will not crash a domain if an invlalid HVC functions were called.
>>>
>>> s/invlalid/invalid/
>>>
>>> The last sentence is misleading. Xen will still inject an undefined
>>> instruction for some SMC/HVC. You may want to rework it to make it
>>> clear.
>>>
>>
>> Now that you say that Xen will still inject an undefined instruction
>> exception for some SMCs, I have to ask: for which exactly?
> For ones that are compatible with ARM SMCCC [1]. E.g if you are
> running SMCCC-compatible system and you are calling SMC/HVC with
> immediate value 0, then you are safe.
>

Alright, as far as I understand this is exactly what I do right now. I
inject an SMC that is encoded as 0xD403.

>> I might be off topic here, so please tell me if you believe this is not
>> the right place for this question. In this case I will open a new
>> thread. Right now, I am working with the previous implementation of
>> do_trap_smc that was extended in this patch. Yet, as far as I
>> understand, the behavior should not change, which is why I am asking
>> this question in this thread.
> If you are talking about forwarding SMC exception to VM monitor, then
> yes, that should not change.

Yes, exactly. Sorry, I forgot to mention that I have a modified
xen-access version running in dom0 that registers an SMC monitor and
also increases the PC by 4 (or, depending on the case, simply leaves it
as it is) on every SMC trap.

>
>> Currently, I am working on SMC guest injections and trying to understand
>> the resulting behavior. Every time, right after the execution of an
>> injected SMC instruction, the guest traps into the undefined instruction
>> exception handler in EL1 and I simply don't understand why. As far as I
>> understand, as soon as an injected SMC instruction gets executed, it should
>> _transparently_ trap into the hypervisor (assuming MDCR_EL2.TDE is set).
>> As soon as the hypervisor returns (e.g. to PC+4 or to the trapping PC
>> that now contains the original instruction instead of the injected SMC),
>> the guest should simply continue its execution.
> Hm. What do you mean by "SMC instruction injection"?

My code runs in dom0 and "injects" an SMC instruction at predefined
addresses inside the guest so as to simulate software breakpoints. By
this, I mean that the code replaces the original guest instruction at a
certain address with an SMC. Think of a debugger that uses software
breakpoints. The idea is to put back the original instruction right
after the SMC gets called, so that the guest can continue with its
execution. You can find more information about that in [0], yet please
consider that I try to trap the SMC directly in Xen instead of TrustZone.

> Current code in the hypervisor will always inject an undefined
> instruction exception when you call SMC (unless you installed a VM
> monitor for the guest). Also, it will not increase the PC. So, if you
> try to remove the inject_undef_exception() call, you'll get into an
> infinite loop.
>

I have a registered SMC monitor running in dom0 that does not reinject
the undefined instruction exception in do_trap_smc(). So there is no
infinite loop at this point. What I see is that as soon as my code in
xen-access (dom0) increments the trapped guest PC by 4 (and also if it
doesn't) the next instruction inside the guest will be inside the undef
instruction handler (I can see that because I have implemented a single
stepping mechanism for AArch64 in Xen that gets activated right after
the guest executes the injected SMC instruction).

>> Now, according to ARM DDI0487B.a D1-1873, the following holds: "If
>> HCR_EL2.TSC or HCR.TSC traps attempted EL1 execution of SMC instructions
>> to EL2, that trap has priority over this disable". So this means that if
>> SMCs are disabled for NS EL1, the guest will trap into the hypervisor on
>> SMC execution. Yet, since SMCs are disabled from NS EL1, the guest will
>> take an undefined instruction exception, which is what I think is
>> currently happening on my ARMv8 dev board (Lemaker Hikey). On
>> the other hand I believe that it is highly unlikely that the EFI loader

Re: [Xen-devel] [PATCH v4 10/11] public: add XENFEAT_ARM_SMCCC_supported feature

2017-08-31 Thread Sergej Proskurin
Hi Volodymyr, hi Julien,


On 08/24/2017 07:25 PM, Julien Grall wrote:
>
>
> On 21/08/17 21:27, Volodymyr Babchuk wrote:
>> This feature indicates that hypervisor is compatible with ARM
>> SMC calling convention. Hypervisor will not inject an undefined
>> instruction exception if an invalid SMC function were called and
>> will not crash a domain if an invlalid HVC functions were called.
>
> s/invlalid/invalid/
>
> The last sentence is misleading. Xen will still inject an undefined
> instruction for some SMC/HVC. You may want to rework it to make it clear.
>

Now that you say that Xen will still inject an undefined instruction
exception for some SMCs, I have to ask: for which exactly?
I might be off topic here, so please tell me if you believe this is not
the right place for this question. In this case I will open a new
thread. Right now, I am working with the previous implementation of
do_trap_smc that was extended in this patch. Yet, as far as I
understand, the behavior should not change, which is why I am asking
this question in this thread.

Currently, I am working on SMC guest injections and trying to understand
the resulting behavior. Every time, right after the execution of an
injected SMC instruction, the guest traps into the undefined instruction
exception handler in EL1 and I simply don't understand why. As far as I
understand, as soon as an injected SMC instruction gets executed, it should
_transparently_ trap into the hypervisor (assuming MDCR_EL2.TDE is set).
As soon as the hypervisor returns (e.g. to PC+4 or to the trapping PC
that now contains the original instruction instead of the injected SMC),
the guest should simply continue its execution.

Now, according to ARM DDI0487B.a D1-1873, the following holds: "If
HCR_EL2.TSC or HCR.TSC traps attempted EL1 execution of SMC instructions
to EL2, that trap has priority over this disable". So this means that if
SMCs are disabled for NS EL1, the guest will trap into the hypervisor on
SMC execution. Yet, since SMCs are disabled from NS EL1, the guest will
take an undefined instruction exception, which is what I think is
currently happening on my ARMv8 dev board (Lemaker Hikey). On
the other hand I believe that it is highly unlikely that the EFI loader
explicitly disables SMCs for NS EL1. However, since I don't have access
to SCR_EL3.SMD from EL2, I can't tell whether this is the reason for the
behavior I am experiencing on my board or not.

It would be of great help if you could provide me with some more clarity
on this case. I am sure that I have missed something that simply needs
clarification. Thank you very much in advance.

Thanks,
~Sergej




Re: [Xen-devel] [PATCH v4 11/39] altp2m: Move (MAX|INVALID)_ALTP2M to xen/p2m-common.h

2017-08-31 Thread Sergej Proskurin
Hi Jan,


On 08/31/2017 10:04 AM, Jan Beulich wrote:
 On 30.08.17 at 20:32,  wrote:
>> We move the macros (MAX|INVALID)_ALTP2M out of x86-related code to
>> common code, as the following patches will make use of them on ARM.
> But both seem not impossible to require arch-specific values.

Right. The general idea at this point is to move as much of altp2m
functionality/configuration as possible into a common place. Yet, if you
believe that, e.g., the number of altp2m views could/should diverge
between both architectures, I will gladly move the defines back into
arch-related parts. However, we need to consider that while x86/Intel
supports up to 512 entries for EPT pointers as part of the VMCS, we are
quite flexible on ARM: we manage the views entirely in software and
hence on ARM we can easily keep up with Intel's specification. This
allows us to hold parts of the altp2m configuration in a unified place.
Or do you believe this is not the right way to go?

Thanks,
~Sergej




Re: [Xen-devel] [PATCH v4 39/39] arm/xen-access: Add test of xc_altp2m_change_gfn

2017-08-30 Thread Sergej Proskurin
Hi Razvan,


[...]

>> +
>> +*gfn_new = ++(xenaccess->max_gpfn);
> Unnecessary parentheses.
>

Thanks.

>> +rc = xc_domain_populate_physmap_exact(xenaccess->xc_handle, domain_id, 1, 0, 0, gfn_new);
>> +if ( rc < 0 )
>> +goto err;
>> +
>> +/* Copy content of the old gfn into the newly allocated gfn */
>> +rc = xenaccess_copy_gfn(xenaccess, domain_id, *gfn_new, gfn_old);
>> +if ( rc < 0 )
>> +goto err;
>> +
>> +rc = xc_altp2m_change_gfn(xenaccess->xc_handle, domain_id, ap2m_idx, gfn_old, *gfn_new);
>> +if ( rc < 0 )
>> +goto err;
>> +
>> +return 0;
>> +
>> +err:
>> +xc_domain_decrease_reservation_exact(xenaccess->xc_handle, domain_id, 1, 0, gfn_new);
>> +
>> +(xenaccess->max_gpfn)--;
> Here too.
>
>> +
>> +return -1;
>> +}
>> +
>> +static int xenaccess_reset_gfn(xc_interface *xch,
>> +   domid_t domain_id,
>> +   unsigned int ap2m_idx,
>> +   xen_pfn_t gfn_old,
>> +   xen_pfn_t gfn_new)
>> +{
>> +int rc;
>> +
>> +/* Reset previous state */
>> +xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_old, INVALID_GFN);
>> +
>> +/* Invalidate the new gfn */
>> +xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_new, INVALID_GFN);
> Do these two xc_altp2m_change_gfn() calls not require error checking?
>
>> +
>> +rc = xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &gfn_new);
>> +if ( rc < 0 )
>> +return -1;
>> +
>> +(xenaccess->max_gpfn)--;
> Again, please remove the parentheses.
>

Thanks again. I will adjust the implementation for v5.

Cheers,
~Sergej



Re: [Xen-devel] [PATCH v4 29/39] x86/altp2m: Move altp2m_check to altp2m.c

2017-08-30 Thread Sergej Proskurin
Hi Razvan,


On 08/30/2017 08:42 PM, Razvan Cojocaru wrote:
> On 08/30/2017 09:32 PM, Sergej Proskurin wrote:
>> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
>> index 42e6f09029..66f1d83d84 100644
>> --- a/xen/common/vm_event.c
>> +++ b/xen/common/vm_event.c
>> @@ -29,6 +29,7 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include <asm/altp2m.h>
> Any reason why this include has not happened alphabetically (it belongs
> to the  group)?

I must have missed that, thank you. I am going to fix this in v5.

Cheers,
~Sergej



[Xen-devel] [PATCH v4 04/39] arm/p2m: Add HVMOP_altp2m_get_domain_state

2017-08-30 Thread Sergej Proskurin
This commit adopts the x86 HVMOP_altp2m_get_domain_state implementation.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Removed the "altp2m_enabled" check in HVMOP_altp2m_get_domain_state
case as it has been moved in front of the switch statement in
"do_altp2m_op".

Removed the macro "altp2m_enabled". Instead, check directly for the
HVM_PARAM_ALTP2M param in d->arch.hvm_domain.
---
 xen/arch/arm/hvm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 6f5f9b41ac..43b8352cb7 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -85,7 +85,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 switch ( a.cmd )
 {
 case HVMOP_altp2m_get_domain_state:
-rc = -EOPNOTSUPP;
+a.u.domain_state.state = altp2m_active(d);
+rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 break;
 
 case HVMOP_altp2m_set_domain_state:
-- 
2.13.3




[Xen-devel] [PATCH v4 22/39] arm/p2m: Make p2m_mem_access_check ready for altp2m

2017-08-30 Thread Sergej Proskurin
This commit extends the functions "p2m_mem_access_check" and
"p2m_mem_access_check_and_get_page" to consider altp2m. The function
"p2m_mem_access_check_and_get_page" needs to translate the gva upon the
hostp2m's vttbr, as it contains all valid mappings while the currently
active altp2m view might not have the required gva mapping yet.

Also, the new implementation fills the request buffer to hold
altp2m-related information.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Extended the function "p2m_mem_access_check_and_get_page" to
consider altp2m. Similar to "get_page_from_gva", the function
"p2m_mem_access_check_and_get_page" needs to translate the gva upon
the hostp2m's vttbr. Although, the function "gva_to_ipa" (called in
"p2m_mem_access_check_and_get_page") performs a stage 1 table walk,
it will access page tables residing in memory. Accesses to this
memory are controlled by the underlying 2nd stage translation table
and hence require the original mappings of the hostp2m.

v4: Cosmetic fixes.

Initialized the variable "ipa" in the function
"p2m_mem_access_check_and_get_page" to satisfy compiler warnings.
---
 xen/arch/arm/mem_access.c | 33 +
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index 5bc28db8ff..ebc3a86af3 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -108,9 +109,31 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
 xenmem_access_t xma;
 p2m_type_t t;
 struct page_info *page = NULL;
-struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+struct domain *d = v->domain;
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+/*
+ * If altp2m is active, we need to translate the gva upon the hostp2m's
+ * vttbr, as it contains all valid mappings while the currently active
+ * altp2m view might not have the required gva mapping yet. Although the
+ * function gva_to_ipa performs a stage 1 table walk, it will access page
+ * tables residing in memory. Accesses to this memory are controlled by the
+ * underlying 2nd stage translation table and hence require the original
+ * mappings of the hostp2m.
+ */
+if ( unlikely(altp2m_active(d)) )
+{
+unsigned long flags = 0;
+uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
+
+p2m_switch_vttbr_and_get_flags(ovttbr, p2m->vttbr, flags);
 
-rc = gva_to_ipa(gva, &ipa, flag);
+rc = gva_to_ipa(gva, &ipa, flag);
+
+p2m_restore_vttbr_and_set_flags(ovttbr, flags);
+}
+else
+rc = gva_to_ipa(gva, &ipa, flag);
 
 /*
  * In case mem_access is active, hardware-based gva_to_ipa translation
@@ -225,13 +248,15 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 xenmem_access_t xma;
 vm_event_request_t *req;
 struct vcpu *v = current;
-struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+struct p2m_domain *p2m = p2m_get_active_p2m(v);
 
 /* Mem_access is not in use. */
 if ( !p2m->mem_access_enabled )
 return true;
 
-rc = p2m_get_mem_access(v->domain, gaddr_to_gfn(gpa), &xma);
+p2m_read_lock(p2m);
+rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(gpa)), &xma);
+p2m_read_unlock(p2m);
 if ( rc )
 return true;
 
-- 
2.13.3




[Xen-devel] [PATCH v4 39/39] arm/xen-access: Add test of xc_altp2m_change_gfn

2017-08-30 Thread Sergej Proskurin
This commit extends xen-access by a simple test of the functionality
provided by "xc_altp2m_change_gfn". The idea is to dynamically remap a
trapping gfn to another mfn, which holds the same content as the
original mfn. On success, the guest will continue to run. Subsequent
altp2m access violations will trap into Xen and be forced by xen-access
to switch to the default view (altp2m[0]) as before. The introduced test
can be invoked by providing the argument "altp2m_remap".

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei Liu <wei.l...@citrix.com>
---
v3: Cosmetic fixes in "xenaccess_copy_gfn" and "xenaccess_change_gfn".

Added munmap in "copy_gfn" in the second error case.

Added option "altp2m_remap" selecting the altp2m-remap test.

v4: Dropped the COMPAT API for mapping foreign memory. Instead, we use the
stable library xenforeignmemory.

Dropped the use of xc_domain_increase_reservation_exact as we do not
need to increase the domain's physical memory. Otherwise, remapping
a page via altp2m would become visible to the guest itself. As long
as we have additional shadow-memory for the guest domain, we do not
need to reserve any additional memory.
---
 tools/tests/xen-access/Makefile |   2 +-
 tools/tests/xen-access/xen-access.c | 182 +++-
 2 files changed, 179 insertions(+), 5 deletions(-)

diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
index e11f639ccf..ab195e233f 100644
--- a/tools/tests/xen-access/Makefile
+++ b/tools/tests/xen-access/Makefile
@@ -26,6 +26,6 @@ clean:
 distclean: clean
 
 xen-access: xen-access.o Makefile
-   $(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenevtchn)
+   $(CC) -o $@ $< $(LDFLAGS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenforeignmemory)
 
 -include $(DEPS)
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 481337cacd..f9b9fb6bbf 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -41,6 +41,7 @@
 #include 
 #include 
 #include 
+#include <xenforeignmemory.h>
 
 #if defined(__arm__) || defined(__aarch64__)
 #include 
@@ -49,6 +50,8 @@
 #define START_PFN 0ULL
 #endif
 
+#define INVALID_GFN ~(0UL)
+
 #define DPRINTF(a, b...) fprintf(stderr, a, ## b)
 #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
 #define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
@@ -76,12 +79,19 @@ typedef struct vm_event {
 typedef struct xenaccess {
 xc_interface *xc_handle;
 
+xenforeignmemory_handle *fmem;
+
 xen_pfn_t max_gpfn;
 
 vm_event_t vm_event;
+
+unsigned int ap2m_idx;
+xen_pfn_t gfn_old;
+xen_pfn_t gfn_new;
 } xenaccess_t;
 
 static int interrupted;
+static int gfn_changed = 0;
 bool evtchn_bind = 0, evtchn_open = 0, mem_access_enable = 0;
 
 static void close_handler(int sig)
@@ -89,6 +99,104 @@ static void close_handler(int sig)
 interrupted = sig;
 }
 
+static int xenaccess_copy_gfn(xenaccess_t *xenaccess,
+  domid_t domain_id,
+  xen_pfn_t dst_gfn,
+  xen_pfn_t src_gfn)
+{
+void *src_vaddr = NULL;
+void *dst_vaddr = NULL;
+
+src_vaddr = xenforeignmemory_map(xenaccess->fmem, domain_id, PROT_READ,
+ 1, &src_gfn, NULL);
+if ( src_vaddr == NULL )
+return -1;
+
+dst_vaddr = xenforeignmemory_map(xenaccess->fmem, domain_id, PROT_WRITE,
+ 1, &dst_gfn, NULL);
+if ( dst_vaddr == NULL )
+{
+munmap(src_vaddr, XC_PAGE_SIZE);
+return -1;
+}
+
+memcpy(dst_vaddr, src_vaddr, XC_PAGE_SIZE);
+
+xenforeignmemory_unmap(xenaccess->fmem, src_vaddr, 1);
+xenforeignmemory_unmap(xenaccess->fmem, dst_vaddr, 1);
+
+return 0;
+}
+
+/*
+ * This function allocates and populates a page in the guest's physmap that is
+ * subsequently filled with contents of the trapping address. Finally, through
+ * the invocation of xc_altp2m_change_gfn, the altp2m subsystem changes the gfn
+ * to mfn mapping of the target altp2m view.
+ */
+static int xenaccess_change_gfn(xenaccess_t *xenaccess,
+domid_t domain_id,
+unsigned int ap2m_idx,
+xen_pfn_t gfn_old,
+xen_pfn_t *gfn_new)
+{
+int rc;
+
+/*
+ * We perform this function only once as it is intended to be used for
+ * testing and demonstration purposes. Thus, we signal that further
+ * altp2m-related traps will not change trappi

[Xen-devel] [PATCH v4 07/39] arm/p2m: Move hostp2m init/teardown to individual functions

2017-08-30 Thread Sergej Proskurin
This commit pulls generic init/teardown functionality out of
"p2m_init" and "p2m_teardown" into "p2m_init_one", "p2m_teardown_one",
and "p2m_flush_table" functions.  This allows our future implementation
to reuse existing code for the initialization/teardown of altp2m views.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Added the function p2m_flush_table to the previous version.

v3: Removed struct vttbr.

Moved define INVALID_VTTBR to p2m.h.

Exported function prototypes of "p2m_flush_table", "p2m_init_one",
and "p2m_teardown_one" in p2m.h.

Extended the function "p2m_flush_table" by additionally resetting
the fields lowest_mapped_gfn and max_mapped_gfn.

Added a "p2m_flush_tlb" call in "p2m_flush_table". On altp2m reset
in function "altp2m_reset", it is important to flush the TLBs after
clearing the root table pages and before clearing the intermediate
altp2m page tables to prevent illegal access to stale TLB entries
on currently active VCPUs.

Added a check checking whether p2m->root is NULL in p2m_flush_table.

Renamed the function "p2m_free_one" to "p2m_teardown_one".

Removed resetting p2m->vttbr in "p2m_teardown_one", as it the p2m
will be destroyed afterwards.

Moved call to "p2m_alloc_table" back to "p2m_init_one".

Moved the introduction of the type p2m_class_t out of this patch.

Moved the backpointer to the struct domain out of the struct
p2m_domain.

v4: Replaced the former use of clear_and_clean_page in p2m_flush_table
by a routine that invalidates every p2m entry atomically. This
avoids inconsistencies on CPUs that continue to use the views that
are to be flushed (e.g., see altp2m_reset).

Removed unnecessary initializations in the functions "p2m_init_one"
and "p2m_teardown_one".

Removed the define INVALID_VTTBR as it is not used any more.

Cosmetic fixes.
---
 xen/arch/arm/p2m.c| 74 +++
 xen/include/asm-arm/p2m.h |  9 ++
 2 files changed, 78 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5e86368010..3a1a38e7af 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1203,27 +1203,65 @@ static void p2m_free_vmid(struct domain *d)
spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+/* Reset this p2m table to be empty. */
+void p2m_flush_table(struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 struct page_info *pg;
+unsigned int i, j;
+lpae_t *table;
+
+if ( p2m->root )
+{
+/* Clear all concatenated root level pages. */
+for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+{
+table = __map_domain_page(p2m->root + i);
+
+for ( j = 0; j < LPAE_ENTRIES; j++ )
+{
+lpae_t *entry = table + j;
+
+/*
+ * Individual altp2m views can be flushed, whilst altp2m is
+ * active. To avoid inconsistencies on CPUs that continue to
+ * use the views to be flushed (e.g., see altp2m_reset), we
+ * must remove every p2m entry atomically.
+ */
+p2m_remove_pte(entry, p2m->clean_pte);
+}
+}
+}
+
+/*
+ * Flush TLBs before releasing remaining intermediate p2m page tables to
+ * prevent illegal access to stale TLB entries.
+ */
+p2m_flush_tlb(p2m);
 
+/* Free the rest of the trie pages back to the paging pool. */
 while ( (pg = page_list_remove_head(&p2m->pages)) )
 free_domheap_page(pg);
 
+p2m->lowest_mapped_gfn = INVALID_GFN;
+p2m->max_mapped_gfn = _gfn(0);
+}
+
+void p2m_teardown_one(struct p2m_domain *p2m)
+{
+p2m_flush_table(p2m);
+
 if ( p2m->root )
 free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
 
 p2m->root = NULL;
 
-p2m_free_vmid(d);
+p2m_free_vmid(p2m->domain);
 
 radix_tree_destroy(&p2m->mem_access_settings, NULL);
 }
 
-int p2m_init(struct domain *d)
+int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 int rc = 0;
 unsigned int cpu;
 
@@ -1268,6 +1306,32 @@ int p2m_init(struct domain *d)
 return rc;
 }
 
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+p2m_teardown_one(p2m);
+}
+
+void p2m_teardown(struct domain *d)
+{
+p2m_teardown_hostp2m(d);
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+p2m->p2m_clas

[Xen-devel] [PATCH v4 27/39] arm/p2m: Add altp2m_propagate_change

2017-08-30 Thread Sergej Proskurin
This commit introduces the function "altp2m_propagate_change" that is
responsible for propagating changes applied to the host's p2m to a
specific or even all altp2m views. In this way, Xen can in-/decrease the
guest's physmem at run-time without leaving the altp2m views with
stale/invalid entries.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Cosmetic fixes.

Changed the locking mechanism to "p2m_write_lock" inside the
function "altp2m_reset".

Removed TLB flushing and resetting of the max_mapped_gfn
lowest_mapped_gfn fields within the function "altp2m_reset". These
operations are performed in the function "p2m_flush_table".

Protected altp2m_active(d) check in "altp2m_propagate_change".

The function "altp2m_propagate_change" now decides whether an entry
needs to be dropped out of the altp2m view only if the smfn value
equals INVALID_MFN.

Extended the function "altp2m_propagate_change" so that it returns
an int value to the caller. Also, the function "apply_p2m_changes"
checks the return value and fails the entire operation on error.

Moved the funtion "modify_altp2m_range" out of this commit.

v4: Use the functions "p2m_(set|get)_entry" instead of the helpers
"p2m_lookup_attr" and "modify_altp2m_entry".
---
 xen/arch/arm/altp2m.c| 84 
 xen/arch/arm/p2m.c   |  4 +++
 xen/include/asm-arm/altp2m.h |  8 +
 3 files changed, 96 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 8c3212780a..4883b1323b 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -123,6 +123,90 @@ int altp2m_set_mem_access(struct domain *d,
 return rc;
 }
 
+static inline void altp2m_reset(struct p2m_domain *p2m)
+{
+p2m_write_lock(p2m);
+p2m_flush_table(p2m);
+p2m_write_unlock(p2m);
+}
+
+int altp2m_propagate_change(struct domain *d,
+gfn_t sgfn,
+unsigned int page_order,
+mfn_t smfn,
+p2m_type_t p2mt,
+p2m_access_t p2ma)
+{
+int rc = 0;
+unsigned int i;
+unsigned int reset_count = 0;
+unsigned int last_reset_idx = ~0;
+struct p2m_domain *p2m;
+mfn_t m;
+
+altp2m_lock(d);
+
+if ( !altp2m_active(d) )
+goto out;
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+p2m = d->arch.altp2m_p2m[i];
+
+if ( p2m == NULL )
+continue;
+
+/*
+ * Get the altp2m mapping. If the smfn has not been dropped, a valid
+ * altp2m mapping needs to be changed/modified accordingly.
+ */
+p2m_read_lock(p2m);
+m = p2m_get_entry(p2m, sgfn, NULL, NULL, NULL);
+p2m_read_unlock(p2m);
+
+/* Check for a dropped page that may impact this altp2m. */
+if ( mfn_eq(smfn, INVALID_MFN) &&
+ (gfn_x(sgfn) >= gfn_x(p2m->lowest_mapped_gfn)) &&
+ (gfn_x(sgfn) <= gfn_x(p2m->max_mapped_gfn)) )
+{
+if ( !reset_count++ )
+{
+altp2m_reset(p2m);
+last_reset_idx = i;
+}
+else
+{
+/* At least 2 altp2m's impacted, so reset everything. */
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+p2m = d->arch.altp2m_p2m[i];
+
+if ( i == last_reset_idx || p2m == NULL )
+continue;
+
+altp2m_reset(p2m);
+}
+goto out;
+}
+}
+else if ( !mfn_eq(m, INVALID_MFN) )
+{
+/* Align the gfn and mfn to the given page order. */
+sgfn = _gfn(gfn_x(sgfn) & ~((1UL << page_order) - 1));
+smfn = _mfn(mfn_x(smfn) & ~((1UL << page_order) - 1));
+
+p2m_write_lock(p2m);
+rc = p2m_set_entry(p2m, sgfn, (1UL << page_order), smfn, p2mt, p2ma);
+p2m_write_unlock(p2m);
+}
+}
+
+out:
+altp2m_unlock(d);
+
+return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
 v->arch.ap2m_idx = INVALID_ALTP2M;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e9274c74a8..dcf7be6439 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -958,6 +958,10 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 else
 rc = 0;
 
+/* Update all affected altp2m views if necessary. */
+if ( p2m_is_hostp2m(p2m) )
+rc = altp2m_propagate_change(p2m->domain, sgfn, page_order, smfn, t, a);
+
 out:
 unmap_dom

[Xen-devel] [PATCH v4 17/39] arm/p2m: Add HVMOP_altp2m_switch_p2m

2017-08-30 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Extended the function "altp2m_switch_domain_altp2m_by_id" so that if
the guest domain indirectly calls this function, the current vcpu also
changes the altp2m view without performing an explicit context switch.

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_switch_domain_altp2m_by_id".

v4: ARM supports an external-only interface to the altp2m subsystem,
i.e., the guest does not have access to altp2m. Thus, we don't have
to consider that the current vcpu will not switch its context in the
function "p2m_restore_state". For this reason, we do not check for
whether we are working on the current vcpu in the function
altp2m_switch_domain_altp2m_by_id. If the current guest access
restriction to the altp2m subsystem should change in the future, we
have to update VTTBR_EL2 directly.

Cosmetic fixes.
---
 xen/arch/arm/altp2m.c| 45 
 xen/arch/arm/hvm.c   |  2 +-
 xen/include/asm-arm/altp2m.h |  4 
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 1128e1af16..9a2cf5a018 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -32,6 +32,51 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
 return v->domain->arch.altp2m_p2m[idx];
 }
 
+int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+struct vcpu *v;
+int rc = -EINVAL;
+
+if ( idx >= MAX_ALTP2M )
+return rc;
+
+domain_pause_except_self(d);
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_p2m[idx] != NULL )
+{
+for_each_vcpu( d, v )
+{
+if ( idx == v->arch.ap2m_idx )
+continue;
+
+atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
+v->arch.ap2m_idx = idx;
+atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+
+/*
+ * ARM supports an external-only interface to the altp2m subsystem,
+ * i.e., the guest does not have access to altp2m. Thus, we don't
+ * have to consider that the current vcpu will not switch its
+ * context in the function "p2m_restore_state".
+ *
+ * XXX: If the current guest access restriction to the altp2m
+ * subsystem should change in the future, we have to update
+ * VTTBR_EL2 directly.
+ */
+}
+
+rc = 0;
+}
+
+altp2m_unlock(d);
+
+domain_unpause_except_self(d);
+
+return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
 v->arch.ap2m_idx = INVALID_ALTP2M;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 4bf2f28a1a..9bddc7e17e 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -135,7 +135,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_switch_p2m:
-rc = -EOPNOTSUPP;
+rc = altp2m_switch_domain_altp2m_by_id(d, a.u.view.view);
 break;
 
 case HVMOP_altp2m_set_mem_access:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 778c6c4f12..d59f704489 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -49,6 +49,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 /* Get current alternate p2m table. */
 struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
 
+/* Switch alternate p2m for entire domain */
+int altp2m_switch_domain_altp2m_by_id(struct domain *d,
+  unsigned int idx);
+
 /* Make a specific alternate p2m valid. */
 int altp2m_init_by_id(struct domain *d,
   unsigned int idx);
-- 
2.13.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v4 28/39] altp2m: Rename p2m_altp2m_check to altp2m_check

2017-08-30 Thread Sergej Proskurin
In this commit, we rename the function "p2m_altp2m_check" to
"altp2m_check". This is a preparation measure for the following commit,
which moves the renamed function "altp2m_check" from p2m.c to altp2m.c
in order to group all altp2m-related functions in one place (namely
altp2m.c). The function is renamed to reflect its association with the
corresponding .c file.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: In this commit, we have pulled the renaming of the function
"p2m_altp2m_check" out of the previous commit "altp2m: Introduce
altp2m_switch_vcpu_altp2m_by_id"
---
 xen/arch/x86/mm/p2m.c | 2 +-
 xen/common/vm_event.c | 2 +-
 xen/include/asm-arm/p2m.h | 2 +-
 xen/include/asm-x86/p2m.h | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e8a57d118c..d5038ed66b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1687,7 +1687,7 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
 }
 }
 
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+void altp2m_check(struct vcpu *v, uint16_t idx)
 {
 if ( altp2m_active(v->domain) )
 p2m_switch_vcpu_altp2m_by_id(v, idx);
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 9291db61c5..42e6f09029 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -418,7 +418,7 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
 
 /* Check for altp2m switch */
 if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
-p2m_altp2m_check(v, rsp.altp2m_idx);
+altp2m_check(v, rsp.altp2m_idx);
 
 if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
 vm_event_set_registers(v, &rsp);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index d3467daacf..5564473e26 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -186,7 +186,7 @@ typedef enum {
  p2m_to_mask(p2m_map_foreign)))
 
 static inline
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+void altp2m_check(struct vcpu *v, uint16_t idx)
 {
 /* Not supported on ARM. */
 }
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 6395e8fd1d..d1cc65f86d 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -804,7 +804,7 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
 bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
 
 /* Check to see if vcpu should be switched to a different p2m. */
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
+void altp2m_check(struct vcpu *v, uint16_t idx);
 
 /* Flush all the alternate p2m's for a domain */
 void p2m_flush_altp2m(struct domain *d);
-- 
2.13.3




[Xen-devel] [PATCH v4 30/39] arm/altp2m: Move altp2m_check to altp2m.h

2017-08-30 Thread Sergej Proskurin
In this commit, we move the function "altp2m_check" from p2m.h to altp2m.h in
order to group all altp2m-related functions in one place, namely in
altp2m.{c|h}. This commit moves only the ARM code.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: This commit has been pulled out of the previous commit "altp2m: Introduce
altp2m_switch_vcpu_altp2m_by_id".
---
 xen/include/asm-arm/altp2m.h | 7 +++
 xen/include/asm-arm/p2m.h| 6 --
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 3e418cb0f0..5a2444e8f8 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -49,6 +49,13 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 /* Get current alternate p2m table. */
 struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
 
+/* Check to see if vcpu should be switched to a different p2m. */
+static inline
+void altp2m_check(struct vcpu *v, uint16_t idx)
+{
+/* Not supported on ARM. */
+}
+
 /* Switch alternate p2m for entire domain */
 int altp2m_switch_domain_altp2m_by_id(struct domain *d,
   unsigned int idx);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5564473e26..5a000d2f67 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -185,12 +185,6 @@ typedef enum {
 (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
  p2m_to_mask(p2m_map_foreign)))
 
-static inline
-void altp2m_check(struct vcpu *v, uint16_t idx)
-{
-/* Not supported on ARM. */
-}
-
 /* Second stage paging setup, to be called on all CPUs */
 void setup_virt_paging(void);
 
-- 
2.13.3




[Xen-devel] [PATCH v4 25/39] arm/p2m: Modify reference count only if hostp2m active

2017-08-30 Thread Sergej Proskurin
This commit makes sure that the page reference count is updated through
the function "p2m_put_l3_page" only when the entries have been freed
from the host's p2m.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>

v4: Moved the check for the host's p2m from "p2m_free_entry" to
"p2m_put_l3_page".
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 246250d8c6..e9274c74a8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -617,7 +617,7 @@ static void p2m_put_l3_page(struct p2m_domain *p2m, const lpae_t pte)
  * flush the TLBs if the page is reallocated before the end of
  * this loop.
  */
-if ( p2m_is_foreign(pte.p2m.type) )
+if ( p2m_is_foreign(pte.p2m.type) && p2m_is_hostp2m(p2m) )
 {
 mfn_t mfn = _mfn(pte.p2m.base);
 
-- 
2.13.3




[Xen-devel] [PATCH v4 08/39] arm/p2m: Cosmetic fix - function prototype of p2m_alloc_table

2017-08-30 Thread Sergej Proskurin
The function "p2m_alloc_table" should be able to allocate 2nd stage
translation tables not only for the host's p2m but also for alternate
p2m's.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Removed altp2m table initialization from "p2m_table_init".

v3: Removed initialization of the field d->arch.altp2m_active in
"p2m_table_init" to avoid altp2m initialization throughout different
files.

Merged the function "p2m_alloc_table" and "p2m_table_init".
---
 xen/arch/arm/p2m.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 3a1a38e7af..65dd2772bf 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1112,9 +1112,8 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
 return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
-static int p2m_alloc_table(struct domain *d)
+static int p2m_alloc_table(struct p2m_domain *p2m)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 struct page_info *page;
 unsigned int i;
 
@@ -1290,7 +1289,7 @@ int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
 p2m->clean_pte = iommu_enabled &&
 !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-rc = p2m_alloc_table(d);
+rc = p2m_alloc_table(p2m);
 
 /*
  * Make sure that the type chosen to is able to store the an vCPU ID
-- 
2.13.3




[Xen-devel] [PATCH v4 10/39] arm/p2m: Change func prototype and impl of p2m_(alloc|free)_vmid

2017-08-30 Thread Sergej Proskurin
This commit changes the prototype and implementation of the functions
"p2m_alloc_vmid" and "p2m_free_vmid". The function "p2m_alloc_vmid" does
not expect the struct domain as argument anymore and returns an
allocated vmid. The function "p2m_free_vmid" takes only the vmid that is
to be freed as argument.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Changed function prototypes and implementation of the functions
"p2m_alloc_vmid" and "p2m_free_vmid".

Changes in "p2m_alloc_vmid":
This function does not expect any arguments. Also, in this commit,
the function "p2m_alloc_vmid" returns either the successfully
allocated vmid or the value INVALID_VMID. Thus, it is now the
responsibility of the caller to set the returned vmid in the
associated fields.

Changes in "p2m_free_vmid":
This function expects now only the vmid of type uint8_t.
---
 xen/arch/arm/p2m.c | 33 -
 1 file changed, 12 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 808d99e1e9..ec855341b9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1162,11 +1162,9 @@ static void p2m_vmid_allocator_init(void)
 set_bit(INVALID_VMID, vmid_mask);
 }
 
-static int p2m_alloc_vmid(struct domain *d)
+static uint8_t p2m_alloc_vmid(void)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-int rc, vmid;
+uint8_t vmid;
 
 spin_lock(&vmid_alloc_lock);
 
@@ -1176,28 +1174,23 @@ static int p2m_alloc_vmid(struct domain *d)
 
 if ( vmid == MAX_VMID )
 {
-rc = -EBUSY;
-printk(XENLOG_ERR "p2m.c: dom%d: VMID pool exhausted\n", d->domain_id);
+vmid = INVALID_VMID;
+printk(XENLOG_ERR "p2m.c: VMID pool exhausted\n");
 goto out;
 }
 
 set_bit(vmid, vmid_mask);
 
-p2m->vmid = vmid;
-
-rc = 0;
-
 out:
 spin_unlock(&vmid_alloc_lock);
-return rc;
+return vmid;
 }
 
-static void p2m_free_vmid(struct domain *d)
+static void p2m_free_vmid(uint8_t vmid)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 spin_lock(&vmid_alloc_lock);
-if ( p2m->vmid != INVALID_VMID )
-clear_bit(p2m->vmid, vmid_mask);
+if ( vmid != INVALID_VMID )
+clear_bit(vmid, vmid_mask);
 
 spin_unlock(&vmid_alloc_lock);
 }
@@ -1254,7 +1247,7 @@ void p2m_teardown_one(struct p2m_domain *p2m)
 
 p2m->root = NULL;
 
-p2m_free_vmid(p2m->domain);
+p2m_free_vmid(p2m->vmid);
 
 radix_tree_destroy(&p2m->mem_access_settings, NULL);
 }
@@ -1267,11 +1260,9 @@ int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
 rwlock_init(&p2m->lock);
 INIT_PAGE_LIST_HEAD(&p2m->pages);
 
-p2m->vmid = INVALID_VMID;
-
-rc = p2m_alloc_vmid(d);
-if ( rc != 0 )
-return rc;
+p2m->vmid = p2m_alloc_vmid();
+if ( p2m->vmid == INVALID_VMID )
+return -EBUSY;
 
 p2m->domain = d;
 p2m->max_mapped_gfn = _gfn(0);
-- 
2.13.3




[Xen-devel] [PATCH v4 35/39] arm/p2m: Adjust debug information to altp2m

2017-08-30 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Dump p2m information of the hostp2m and all altp2m views.

v4: Adjust printk format.
---
 xen/arch/arm/p2m.c | 20 
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index dcf7be6439..db213bea20 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -103,6 +103,26 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
 
 dump_pt_walk(page_to_maddr(p2m->root), addr,
  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+printk("\n");
+
+if ( altp2m_active(d) )
+{
+unsigned int i;
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_p2m[i] == NULL )
+continue;
+
+p2m = d->arch.altp2m_p2m[i];
+
+printk("AP2M[%u] @ %p mfn:%lx\n",
+i, p2m->root, __page_to_mfn(p2m->root));
+
+dump_pt_walk(page_to_maddr(p2m->root), addr, P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+printk("\n");
+}
+}
 }
 
 void p2m_save_state(struct vcpu *p)
-- 
2.13.3




[Xen-devel] [PATCH v4 32/39] arm/altp2m: Make altp2m_vcpu_idx ready for altp2m

2017-08-30 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/include/asm-arm/altp2m.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index f9e14ab1dc..eff6bd5a38 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -35,9 +35,7 @@ static inline bool_t altp2m_active(const struct domain *d)
 /* Alternate p2m VCPU */
 static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 {
-/* Not implemented on ARM, should not be reached. */
-BUG();
-return 0;
+return v->arch.ap2m_idx;
 }
 
 int altp2m_init(struct domain *d);
-- 
2.13.3




[Xen-devel] [PATCH v4 11/39] altp2m: Move (MAX|INVALID)_ALTP2M to xen/p2m-common.h

2017-08-30 Thread Sergej Proskurin
We move the macros (MAX|INVALID)_ALTP2M out of x86-related code to
common code, as the following patches will make use of them on ARM.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Cc: George Dunlap <george.dun...@eu.citrix.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Tim Deegan <t...@xen.org>
Cc: Wei Liu <wei.l...@citrix.com>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: We have introduced this patch to our patch series.
---
 xen/include/asm-arm/altp2m.h| 1 +
 xen/include/asm-x86/domain.h| 3 +--
 xen/include/xen/altp2m-common.h | 8 
 3 files changed, 10 insertions(+), 2 deletions(-)
 create mode 100644 xen/include/xen/altp2m-common.h

diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 0711796123..66afa959f6 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -20,6 +20,7 @@
 #ifndef __ASM_ARM_ALTP2M_H
 #define __ASM_ARM_ALTP2M_H
 
+#include <xen/altp2m-common.h>
 #include 
 
 /* Alternate p2m on/off per domain */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index fb8bf17458..1d10f4b59f 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -1,6 +1,7 @@
 #ifndef __ASM_DOMAIN_H__
 #define __ASM_DOMAIN_H__
 
+#include <xen/altp2m-common.h>
 #include 
 #include 
 #include 
@@ -234,8 +235,6 @@ struct paging_vcpu {
 
 #define MAX_NESTEDP2M 10
 
-#define MAX_ALTP2M  10 /* arbitrary */
-#define INVALID_ALTP2M  0x
 #define MAX_EPTP(PAGE_SIZE / sizeof(uint64_t))
 struct p2m_domain;
 struct time_scale {
diff --git a/xen/include/xen/altp2m-common.h b/xen/include/xen/altp2m-common.h
new file mode 100644
index 00..670fb42292
--- /dev/null
+++ b/xen/include/xen/altp2m-common.h
@@ -0,0 +1,8 @@
+#ifndef __XEN_ALTP2M_COMMON_H__
+#define __XEN_ALTP2M_COMMON_H__
+
+#define MAX_ALTP2M  10  /* The system may contain an arbitrary number
+   of altp2m views. */
+#define INVALID_ALTP2M  0x
+
+#endif /* __XEN_ALTP2M_COMMON_H__ */
-- 
2.13.3




[Xen-devel] [PATCH v4 20/39] arm/p2m: Make get_page_from_gva ready for altp2m

2017-08-30 Thread Sergej Proskurin
The function get_page_from_gva uses ARM's hardware support to translate
gva's to machine addresses. This function is used, among others, for
memory regulation purposes, e.g., within the context of memory ballooning.
To ensure correct behavior while altp2m is in use, we use the host's p2m
table for the associated gva to ma translation. This is required at this
point, as altp2m lazily copies pages from the host's p2m and might even
be flushed because of changes to the host's p2m (as is done within the
context of memory ballooning).

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Cosmetic fixes.

Make use of the p2m_(switch|restore)_vttbr_and_(g|s)et_flags macros
to avoid code duplication.

v4: Remove initialization of the old vttbr outside of the macro
"p2m_switch_vttbr_and_get_flags".
---
 xen/arch/arm/p2m.c | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 16c7585ffa..20d7784708 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1470,7 +1470,24 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 
 p2m_read_lock(p2m);
 
-rc = gvirt_to_maddr(va, &maddr, flags);
+/*
+ * If altp2m is active, we need to translate the gva upon the hostp2m's
+ * vttbr, as it contains all valid mappings while the currently active
+ * altp2m view might not have the required gva mapping yet.
+ */
+if ( unlikely(altp2m_active(d)) )
+{
+unsigned long flags = 0;
+uint64_t ovttbr;
+
+p2m_switch_vttbr_and_get_flags(ovttbr, p2m->vttbr, flags);
+
+rc = gvirt_to_maddr(va, &maddr, flags);
+
+p2m_restore_vttbr_and_set_flags(ovttbr, flags);
+}
+else
+rc = gvirt_to_maddr(va, &maddr, flags);
 
 if ( rc )
 goto err;
-- 
2.13.3




[Xen-devel] [PATCH v4 16/39] arm/p2m: Add HVMOP_altp2m_destroy_p2m

2017-08-30 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Substituted the call to tlb_flush for p2m_flush_table.
Added comments.
Cosmetic fixes.

v3: Changed the locking mechanism to "p2m_write_lock" inside the
function "altp2m_destroy_by_id".

Do not flush but rather teardown the altp2m in the function
"altp2m_destroy_by_id".

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_destroy_by_id".

v4: Removed locking the p2m in "altp2m_destroy_by_id" as the p2m is not
used by anyone else at this point.
---
 xen/arch/arm/altp2m.c| 39 +++
 xen/arch/arm/hvm.c   |  2 +-
 xen/include/asm-arm/altp2m.h |  4 
 3 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 6b1e34709f..1128e1af16 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -180,6 +180,45 @@ void altp2m_flush_complete(struct domain *d)
 altp2m_unlock(d);
 }
 
+int altp2m_destroy_by_id(struct domain *d, unsigned int idx)
+{
+struct p2m_domain *p2m;
+int rc = -EBUSY;
+
+/*
+ * The altp2m[0] is considered as the hostp2m and is used as a safe harbor
+ * to which you can switch as long as altp2m is active. After deactivating
+ * altp2m, the system switches back to the original hostp2m view. That is,
+ * altp2m[0] should only be destroyed/flushed/freed, when altp2m is
+ * deactivated.
+ */
+if ( !idx || idx >= MAX_ALTP2M )
+return rc;
+
+domain_pause_except_self(d);
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_p2m[idx] != NULL )
+{
+p2m = d->arch.altp2m_p2m[idx];
+
+if ( !_atomic_read(p2m->active_vcpus) )
+{
+p2m_teardown_one(p2m);
+xfree(p2m);
+d->arch.altp2m_p2m[idx] = NULL;
+rc = 0;
+}
+}
+
+altp2m_unlock(d);
+
+domain_unpause_except_self(d);
+
+return rc;
+}
+
 void altp2m_teardown(struct domain *d)
 {
 unsigned int i;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index caa2e1b516..4bf2f28a1a 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -131,7 +131,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_destroy_p2m:
-rc = -EOPNOTSUPP;
+rc = altp2m_destroy_by_id(d, a.u.view.view);
 break;
 
 case HVMOP_altp2m_switch_p2m:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index b9719f9d5b..778c6c4f12 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -60,4 +60,8 @@ int altp2m_init_next_available(struct domain *d,
 /* Flush all the alternate p2m's for a domain. */
 void altp2m_flush_complete(struct domain *d);
 
+/* Make a specific alternate p2m invalid */
+int altp2m_destroy_by_id(struct domain *d,
+ unsigned int idx);
+
 #endif /* __ASM_ARM_ALTP2M_H */
-- 
2.13.3




[Xen-devel] [PATCH v4 19/39] arm/p2m: Make p2m_restore_state ready for altp2m

2017-08-30 Thread Sergej Proskurin
This commit adapts the function "p2m_restore_state" in a way that the
currently active altp2m table is considered during state restoration.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Moved declaration of "altp2m_switch_domain_altp2m_by_id" out of this
patch.

v4: Moved the variable "p2m", as to satisfy compiler warnings, prohibiting
mixing declarations and code.
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e017e2972e..16c7585ffa 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -112,8 +112,8 @@ void p2m_save_state(struct vcpu *p)
 
 void p2m_restore_state(struct vcpu *n)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(n->domain);
 uint8_t *last_vcpu_ran;
+struct p2m_domain *p2m = p2m_get_active_p2m(n);
 
 if ( is_idle_vcpu(n) )
 return;
-- 
2.13.3




[Xen-devel] [PATCH v4 29/39] x86/altp2m: Move altp2m_check to altp2m.c

2017-08-30 Thread Sergej Proskurin
In this commit, we move the function "altp2m_check" from p2m.c to altp2m.c in
order to group all altp2m-related functions in one place, namely in
altp2m.{c|h}. This commit moves only the x86 code.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: George Dunlap <george.dun...@eu.citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
---
v4: This commit has been pulled out of the previous commit "altp2m: Introduce
altp2m_switch_vcpu_altp2m_by_id".
---
 xen/arch/x86/mm/altp2m.c | 6 ++
 xen/arch/x86/mm/p2m.c| 6 --
 xen/common/vm_event.c| 1 +
 xen/include/asm-x86/altp2m.h | 3 +++
 xen/include/asm-x86/p2m.h| 3 ---
 5 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 930bdc2669..00abb5a5bb 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -65,6 +65,12 @@ altp2m_vcpu_destroy(struct vcpu *v)
 vcpu_unpause(v);
 }
 
+void altp2m_check(struct vcpu *v, uint16_t idx)
+{
+if ( altp2m_active(v->domain) )
+p2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index d5038ed66b..3feb6315c2 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1687,12 +1687,6 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
 }
 }
 
-void altp2m_check(struct vcpu *v, uint16_t idx)
-{
-if ( altp2m_active(v->domain) )
-p2m_switch_vcpu_altp2m_by_id(v, idx);
-}
-
 static struct p2m_domain *
 p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
 {
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 42e6f09029..66f1d83d84 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* for public/io/ring.h macros */
 #define xen_mb()   smp_mb()
diff --git a/xen/include/asm-x86/altp2m.h b/xen/include/asm-x86/altp2m.h
index 64c761873e..67d0205612 100644
--- a/xen/include/asm-x86/altp2m.h
+++ b/xen/include/asm-x86/altp2m.h
@@ -38,4 +38,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 return vcpu_altp2m(v).p2midx;
 }
 
+/* Check to see if vcpu should be switched to a different p2m. */
+void altp2m_check(struct vcpu *v, uint16_t idx);
+
 #endif /* __ASM_X86_ALTP2M_H */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index d1cc65f86d..863d7559cb 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -803,9 +803,6 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
 /* Switch alternate p2m for a single vcpu */
 bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
 
-/* Check to see if vcpu should be switched to a different p2m. */
-void altp2m_check(struct vcpu *v, uint16_t idx);
-
 /* Flush all the alternate p2m's for a domain */
 void p2m_flush_altp2m(struct domain *d);
 
-- 
2.13.3




[Xen-devel] [PATCH v4 26/39] arm/p2m: Add HVMOP_altp2m_set_mem_access

2017-08-30 Thread Sergej Proskurin
The HVMOP_altp2m_set_mem_access allows setting gfn permissions (currently
one page at a time) of a specific altp2m view. In case the view does not
hold the requested gfn entry, it is first copied from the host's p2m table
and then modified as requested.
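The lazy-fill behaviour described above (copy the entry from the host p2m only when the altp2m view lacks it, then apply the new permissions) can be modelled with a small standalone sketch. The types and names below are illustrative only, not the Xen implementation:

```c
#include <assert.h>

/* Toy model: each "view" maps a gfn to an mfn plus an access value;
 * INVALID_MFN marks an absent entry. */
#define NR_GFNS     8
#define INVALID_MFN (-1)

struct view {
    int mfn[NR_GFNS];
    int access[NR_GFNS];
};

/*
 * Set the access rights for gfn in the alternate view ap2m. If the entry
 * is not yet present there, copy it from the host view hp2m first, then
 * apply the requested access on the copy. The host view is never changed.
 */
int set_mem_access(const struct view *hp2m, struct view *ap2m,
                   unsigned int gfn, int access)
{
    if ( ap2m->mfn[gfn] == INVALID_MFN )
    {
        if ( hp2m->mfn[gfn] == INVALID_MFN )
            return -1;                    /* mapped nowhere: -ESRCH */
        ap2m->mfn[gfn] = hp2m->mfn[gfn];  /* lazy copy from the host view */
    }
    ap2m->access[gfn] = access;           /* apply the requested access */
    return 0;
}
```

The real code additionally handles superpages by first propagating the whole superpage mapping before changing the single 4K entry.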

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
---
v2: Prevent the page reference count from being falsely updated on
altp2m modification. Therefore, we add a check determining whether
the target p2m is a hostp2m before p2m_put_l3_page is called.

v3: Cosmetic fixes.

Added the functionality to set/get the default_access also in/from
the requested altp2m view.

Read-locked hp2m in "altp2m_set_mem_access".

Moved the functions "p2m_is_(hostp2m|altp2m)" out of this commit.

Moved the function "modify_altp2m_entry" out of this commit.

Moved the function "p2m_lookup_attr" out of this commit.

Moved guards for "p2m_put_l3_page" out of this commit.

v4: Cosmetic fixes.

Removed locking altp2m_lock, as it unnecessarily serializes accesses
to "altp2m_set_mem_access".

Use the functions "p2m_(set|get)_entry" instead of the helpers
"p2m_lookup_attr" and "modify_altp2m_entry".

Removed the restriction that allowed changing the memory access only of
p2m_ram_(rw|ro) pages. Instead, we allow setting memory permissions on all
pages of the particular altp2m view.

Move the functionality locking ap2m and hp2m out of "altp2m_set_mem_access"
into "p2m_set_mem_access".

Comment the need for the default access in altp2m views.
---
 xen/arch/arm/altp2m.c| 46 
 xen/arch/arm/hvm.c   |  7 -
 xen/arch/arm/mem_access.c| 72 +++-
 xen/include/asm-arm/altp2m.h | 12 
 4 files changed, 122 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 9a2cf5a018..8c3212780a 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -77,6 +77,52 @@ int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 return rc;
 }
 
+int altp2m_set_mem_access(struct domain *d,
+  struct p2m_domain *hp2m,
+  struct p2m_domain *ap2m,
+  p2m_access_t a,
+  gfn_t gfn)
+{
+p2m_type_t p2mt;
+p2m_access_t old_a;
+mfn_t mfn, mfn_sp;
+gfn_t gfn_sp;
+unsigned int order;
+int rc;
+
+/* Check if entry is part of the altp2m view. */
+mfn = p2m_get_entry(ap2m, gfn, &p2mt, NULL, &order);
+
+/* Check host p2m if no valid entry in ap2m. */
+if ( mfn_eq(mfn, INVALID_MFN) )
+{
+/* Check if entry is part of the host p2m view. */
+mfn = p2m_get_entry(hp2m, gfn, &p2mt, &old_a, &order);
+if ( mfn_eq(mfn, INVALID_MFN) )
+return -ESRCH;
+
+/* If this is a superpage, copy that first. */
+if ( order != THIRD_ORDER )
+{
+/* Align the gfn and mfn to the given page order. */
+gfn_sp = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
+mfn_sp = _mfn(mfn_x(mfn) & ~((1UL << order) - 1));
+
+rc = p2m_set_entry(ap2m, gfn_sp, (1UL << order), mfn_sp, p2mt, old_a);
+if ( rc )
+return rc;
+}
+}
+
+/* Align the gfn and mfn to the given page order. */
+gfn = _gfn(gfn_x(gfn) & ~((1UL << THIRD_ORDER) - 1));
+mfn = _mfn(mfn_x(mfn) & ~((1UL << THIRD_ORDER) - 1));
+
+rc = p2m_set_entry(ap2m, gfn, (1UL << THIRD_ORDER), mfn, p2mt, a);
+
+return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
 v->arch.ap2m_idx = INVALID_ALTP2M;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 9bddc7e17e..7e91f2436d 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -139,7 +139,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_set_mem_access:
-rc = -EOPNOTSUPP;
+if ( a.u.set_mem_access.pad )
+rc = -EINVAL;
+else
+rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+a.u.set_mem_access.hvmmem_access,
+a.u.set_mem_access.view);
 break;
 
 case HVMOP_altp2m_change_gfn:
diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index ebc3a86af3..ee2a43fc6e 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -374,7 +374,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 uint32_t st

[Xen-devel] [PATCH v4 13/39] arm/p2m: Add altp2m table flushing routine

2017-08-30 Thread Sergej Proskurin
The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the function altp2m_flush,
which allows releasing all of the alternate p2m views. To make sure
that we flush alternate p2m's only if they are not used by any vCPU, we
introduce a counter that tracks the vCPUs currently using the
particular p2m.
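The vCPU tracking described above can be sketched as a toy model: each view carries a counter of vCPUs currently using it, and a view is released only once that counter has dropped to zero. Names and types below are illustrative, not the Xen code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_VIEWS 4

struct toy_p2m {
    int active_vcpus;       /* vCPUs currently running on this view */
};

struct toy_domain {
    struct toy_p2m *views[MAX_VIEWS];
};

/* A vCPU takes a reference when it starts using a view, drops it when done. */
void vcpu_attach(struct toy_p2m *p2m) { p2m->active_vcpus++; }
void vcpu_detach(struct toy_p2m *p2m) { p2m->active_vcpus--; }

/* Release every view; refuses if any view is still referenced. */
int flush_complete(struct toy_domain *d)
{
    for ( unsigned int i = 0; i < MAX_VIEWS; i++ )
    {
        struct toy_p2m *p2m = d->views[i];

        if ( p2m == NULL )
            continue;
        if ( p2m->active_vcpus != 0 )
            return -1;               /* still in use: refuse to flush */
        free(p2m);
        d->views[i] = NULL;
    }
    return 0;
}
```

The patch itself goes one step further: since altp2m must already be inactive when the flush runs, it simply ASSERTs that every counter is zero instead of returning an error.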

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Pages in p2m->pages are not cleared in p2m_flush_table anymore.
VMID is freed in p2m_free_one.
Cosmetic fixes.

v3: Changed the locking mechanism to "p2m_write_lock" inside the
function "altp2m_flush".

Do not flush but rather teardown the altp2m in the function
"altp2m_flush".

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_flush".

v4: Removed the p2m locking instructions in "altp2m_flush", as they are
not needed: altp2m should be inactive at this point, and thus the
particular p2m should not be used by any vCPU. Therefore, we added an
ASSERT statement to ensure this.

We introduce the counter active_vcpus as part of this patch.

Rename the function altp2m_flush to altp2m_flush_complete.
---
 xen/arch/arm/altp2m.c| 32 
 xen/include/asm-arm/altp2m.h |  3 +++
 xen/include/asm-arm/p2m.h|  5 +
 3 files changed, 40 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index e73b69d99d..9c06055a94 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -28,6 +28,38 @@ int altp2m_init(struct domain *d)
 return 0;
 }
 
+void altp2m_flush_complete(struct domain *d)
+{
+unsigned int i;
+struct p2m_domain *p2m;
+
+/*
+ * If altp2m is active, we are not allowed to flush altp2m[0]. This special
+ * view is considered as the hostp2m as long as altp2m is active.
+ */
+ASSERT(!altp2m_active(d));
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+p2m = d->arch.altp2m_p2m[i];
+
+if ( p2m == NULL )
+continue;
+
+ASSERT(!atomic_read(&p2m->active_vcpus));
+
+/* We do not need to lock the p2m, as altp2m is inactive. */
+p2m_teardown_one(p2m);
+
+xfree(p2m);
+d->arch.altp2m_p2m[i] = NULL;
+}
+
+altp2m_unlock(d);
+}
+
 void altp2m_teardown(struct domain *d)
 {
 unsigned int i;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 1706f61f0c..e116cce25f 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -43,4 +43,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 int altp2m_init(struct domain *d);
 void altp2m_teardown(struct domain *d);
 
+/* Flush all the alternate p2m's for a domain. */
+void altp2m_flush_complete(struct domain *d);
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 9bb38e689a..e8a2116081 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -10,6 +10,8 @@
 #include 
 #include 
 
+#include 
+
 #define paddr_bits PADDR_BITS
 
 #define p2m_switch_vttbr_and_get_flags(ovttbr, nvttbr, flags)   \
@@ -132,6 +134,9 @@ struct p2m_domain {
 /* Keeping track on which CPU this p2m was used and for which vCPU */
 uint8_t last_vcpu_ran[NR_CPUS];
 
+/* Alternate p2m: count of vcpu's currently using this p2m. */
+atomic_t active_vcpus;
+
 /* Choose between: host/alternate. */
 p2m_class_t p2m_class;
 };
-- 
2.13.3




[Xen-devel] [PATCH v4 01/39] arm/p2m: Introduce p2m_(switch|restore)_vttbr_and_(g|s)et_flags

2017-08-30 Thread Sergej Proskurin
This commit introduces macros for switching and restoring the VTTBR
while saving and restoring the current IRQ flags. We define these
macros, as the following commits will use the associated functionality
multiple times throughout different files.
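The save/switch/restore pattern the macros implement can be illustrated with a standalone model in which plain globals stand in for VTTBR_EL2 and the local IRQ state; all names here are illustrative, not the real sysreg accessors:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint64_t fake_vttbr;            /* stands in for VTTBR_EL2 */
static bool irqs_enabled = true;       /* stands in for the local IRQ state */

/* Save the current value; switch with IRQs disabled only if different. */
static void switch_vttbr_and_get_flags(uint64_t *ovttbr, uint64_t nvttbr,
                                       bool *flags)
{
    *ovttbr = fake_vttbr;              /* save the current value first */
    if ( *ovttbr != nvttbr )
    {
        *flags = irqs_enabled;         /* local_irq_save() */
        irqs_enabled = false;
        fake_vttbr = nvttbr;           /* WRITE_SYSREG64() + isb() */
    }
}

/* Undo the switch, and re-enable IRQs, only if one actually happened. */
static void restore_vttbr_and_set_flags(uint64_t ovttbr, bool flags)
{
    if ( ovttbr != fake_vttbr )
    {
        fake_vttbr = ovttbr;
        irqs_enabled = flags;          /* local_irq_restore() */
    }
}
```

The asymmetry matters: when the old and new VTTBR are equal, neither the write nor the IRQ save/restore happens, which is exactly why the restore macro re-reads the register before undoing anything.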

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: Save the content of VTTBR_EL2 inside of the introduced macro
"p2m_switch_vttbr_and_get_flags".

Move the introduced macros into ./xen/include/asm-arm/p2m.h, as they will
be used by different files in the future commits.
---
 xen/arch/arm/p2m.c| 15 ++-
 xen/include/asm-arm/p2m.h | 21 +
 2 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c484469e6c..4334e3bc81 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -147,22 +147,11 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
  * ARM only provides an instruction to flush TLBs for the current
  * VMID. So switch to the VTTBR of a given P2M if different.
  */
-ovttbr = READ_SYSREG64(VTTBR_EL2);
-if ( ovttbr != p2m->vttbr )
-{
-local_irq_save(flags);
-WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
-isb();
-}
+p2m_switch_vttbr_and_get_flags(ovttbr, p2m->vttbr, flags);
 
 flush_tlb();
 
-if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
-{
-WRITE_SYSREG64(ovttbr, VTTBR_EL2);
-isb();
-local_irq_restore(flags);
-}
+p2m_restore_vttbr_and_set_flags(ovttbr, flags);
 }
 
 /*
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index aa0d60ae3a..500dc88fbc 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -12,6 +12,27 @@
 
 #define paddr_bits PADDR_BITS
 
+#define p2m_switch_vttbr_and_get_flags(ovttbr, nvttbr, flags)   \
+({  \
+ ovttbr = READ_SYSREG64(VTTBR_EL2); \
+ if ( ovttbr != nvttbr )\
+ {  \
+local_irq_save(flags);  \
+WRITE_SYSREG64(nvttbr, VTTBR_EL2);  \
+isb();  \
+ }  \
+})
+
+#define p2m_restore_vttbr_and_set_flags(ovttbr, flags)  \
+({  \
+ if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )  \
+ {  \
+WRITE_SYSREG64(ovttbr, VTTBR_EL2);  \
+isb();  \
+local_irq_restore(flags);   \
+ }  \
+})
+
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
-- 
2.13.3




[Xen-devel] [PATCH v4 12/39] arm/p2m: Add altp2m init/teardown routines

2017-08-30 Thread Sergej Proskurin
The p2m initialization now invokes initialization routines responsible
for the allocation and initialization of altp2m structures. The same
applies to the teardown routines. The functionality has been adapted
from the x86 altp2m implementation.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Shared code between host/altp2m init/teardown functions.
Added conditional init/teardown of altp2m.
Altp2m related functions are moved to altp2m.c

v3: Removed locking the altp2m_lock in altp2m_teardown. Taking this
lock at this point is unnecessary.

Removed re-setting altp2m_vttbr, altp2m_p2m, and altp2m_active
values in the function "altp2m_teardown". Re-setting these values is
unnecessary as the entire domain will be destroyed right afterwards.

Removed check for "altp2m_enabled" in "p2m_init" as altp2m has not yet
been enabled by libxl at this point.

Removed check for "altp2m_enabled" before tearing down altp2m within
the function "p2m_teardown" so that altp2m gets destroyed even if
the HVM_PARAM_ALTP2M gets reset before "p2m_teardown" is called.

Added initialization of the field d->arch.altp2m_active in
"altp2m_init".

Removed check for already initialized vmid's in "altp2m_init_one",
as "altp2m_init_one" is now called always with an uninitialized p2m.

Removed the array altp2m_vttbr[] in struct arch_domain.

v4: Removed initialization of altp2m_p2m[] to NULL in altp2m_init, as
the "struct arch_domain" is already initialized to zero.

We moved the definition of the macro MAX_ALTP2M to a common place in
a separate commit.
---
 xen/arch/arm/Makefile|  1 +
 xen/arch/arm/altp2m.c| 56 
 xen/arch/arm/p2m.c   | 15 +++-
 xen/include/asm-arm/altp2m.h |  6 +
 xen/include/asm-arm/domain.h | 11 -
 5 files changed, 87 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 282d2c2949..a08683335d 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -5,6 +5,7 @@ subdir-$(CONFIG_ARM_64) += efi
 subdir-$(CONFIG_ACPI) += acpi
 
 obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
+obj-y += altp2m.o
 obj-y += bootfdt.init.o
 obj-y += cpu.o
 obj-y += cpuerrata.o
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
new file mode 100644
index 00..e73b69d99d
--- /dev/null
+++ b/xen/arch/arm/altp2m.c
@@ -0,0 +1,56 @@
+/*
+ * arch/arm/altp2m.c
+ *
+ * Alternate p2m
+ * Copyright (c) 2016 Sergej Proskurin <prosku...@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include 
+#include 
+
+int altp2m_init(struct domain *d)
+{
+spin_lock_init(&d->arch.altp2m_lock);
+d->arch.altp2m_active = false;
+
+return 0;
+}
+
+void altp2m_teardown(struct domain *d)
+{
+unsigned int i;
+struct p2m_domain *p2m;
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+p2m = d->arch.altp2m_p2m[i];
+
+if ( !p2m )
+continue;
+
+p2m_teardown_one(p2m);
+xfree(p2m);
+}
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ec855341b9..e017e2972e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define MAX_VMID_8_BIT  (1UL << 8)
 #define MAX_VMID_16_BIT (1UL << 16)
@@ -1305,6 +1306,12 @@ static void p2m_teardown_hostp2m(struct domain *d)
 
 void p2m_teardown(struct domain *d)
 {
+/*
+ * Teardown altp2m unconditionally so that altp2m gets always destroyed --
+ * even if HVM_PARAM_ALTP2M gets reset before teardown.
+ */
+altp2m_teardown(d);
+
 p2m_teardown_hostp2m(d);
 }
 
@@ -1319,7 +1326,13 @@ static int p2m_init_hostp2m(struct domain *d)
 
 int p2m_init(struct domain *d)
 {
-return p2m_init_hostp2m(d);
+int rc;
+
+rc = p2m_init_hostp2m(d);
+if ( rc )
+return rc;
+
+return altp2m_init(d);
 }
 
 /*
diff --git a/xen/include/a

[Xen-devel] [PATCH v4 15/39] arm/p2m: Add HVMOP_altp2m_create_p2m

2017-08-30 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Cosmetic fixes.

v3: Cosmetic fixes.

Renamed the function "altp2m_init_next" to
"altp2m_init_next_available".

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_init_next_available".
---
 xen/arch/arm/altp2m.c| 23 +++
 xen/arch/arm/hvm.c   |  3 ++-
 xen/include/asm-arm/altp2m.h |  4 
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 43e95c5681..6b1e34709f 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -117,6 +117,29 @@ int altp2m_init_by_id(struct domain *d, unsigned int idx)
 return rc;
 }
 
+int altp2m_init_next_available(struct domain *d, uint16_t *idx)
+{
+int rc = -EINVAL;
+uint16_t i;
+
+altp2m_lock(d);
+
+for ( i = 0; i < MAX_ALTP2M; i++ )
+{
+if ( d->arch.altp2m_p2m[i] != NULL )
+continue;
+
+rc = altp2m_init_helper(d, i);
+*idx = i;
+
+break;
+}
+
+altp2m_unlock(d);
+
+return rc;
+}
+
 int altp2m_init(struct domain *d)
 {
 spin_lock_init(&d->arch.altp2m_lock);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index ec8e259797..caa2e1b516 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -126,7 +126,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_create_p2m:
-rc = -EOPNOTSUPP;
+if ( !(rc = altp2m_init_next_available(d, &idx)) )
+rc = __copy_to_guest(arg, &idx, 1) ? -EFAULT : 0;
 break;
 
 case HVMOP_altp2m_destroy_p2m:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 2ef88cec35..b9719f9d5b 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -53,6 +53,10 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
 int altp2m_init_by_id(struct domain *d,
   unsigned int idx);
 
+/* Find and initialize the next available alternate p2m. */
+int altp2m_init_next_available(struct domain *d,
+   uint16_t *idx);
+
 /* Flush all the alternate p2m's for a domain. */
 void altp2m_flush_complete(struct domain *d);
 
-- 
2.13.3




[Xen-devel] [PATCH v4 14/39] arm/p2m: Add HVMOP_altp2m_set_domain_state

2017-08-30 Thread Sergej Proskurin
The HVMOP_altp2m_set_domain_state allows activating altp2m on a
specific domain. This commit adapts the x86
HVMOP_altp2m_set_domain_state implementation.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Dynamically allocate memory for altp2m views only when needed.
Move altp2m related helpers to altp2m.c.
p2m_flush_tlb is made publicly accessible.

v3: Cosmetic fixes.

Removed call to "p2m_alloc_table" in "altp2m_init_helper" as the
entire p2m allocation is now done within the function
"p2m_init_one". The same applies to the call of the function
"p2m_flush_tlb" from "p2m_init_one".

Removed the "altp2m_enabled" check in HVMOP_altp2m_set_domain_state
case as it has been moved in front of the switch statement in
"do_altp2m_op".

Changed the order of setting the new altp2m state (depending on
setting/resetting the state) in HVMOP_altp2m_set_domain_state case.

Removed the call to altp2m_vcpu_reset from altp2m_vcpu_initialize,
as the p2midx is set right after the call to 0, representing the
default view.

Moved the define "vcpu_altp2m" from domain.h to altp2m.h to avoid
defining altp2m-related functionality in multiple files. Also renamed
"vcpu_altp2m" to "altp2m_vcpu".

Declared the function "p2m_flush_tlb" as static, as it is not called
from altp2m.h anymore.

Exported the function "altp2m_get_altp2m" in altp2m.h.

Exchanged the check "altp2m_vttbr[idx] == INVALID_VTTBR" for
"altp2m_p2m[idx] == NULL" in "altp2m_init_by_id".

Set the field p2m->access_required to false by default.

v4: Removed unnecessary initialization in "altp2m_init_helper".

Move the field "active_vcpus" in "struct p2m_domain" out of this
commit.

Set "d->arch.altp2m_active" to the provided new state only once instead of
for each vCPU.

Move the definition of the macro INVALID_ALTP2M to a common place in
a separate commit.

ARM supports an external-only interface to the altp2m subsystem,
i.e., the guest does not have access to the altp2m subsystem. Thus,
we remove the check for the current vcpu in the function
altp2m_vcpu_initialize; there is no scenario in which a guest is
allowed to initialize the altp2m subsystem for itself.

Cosmetic fixes.
---
 xen/arch/arm/altp2m.c| 97 
 xen/arch/arm/hvm.c   | 30 +-
 xen/include/asm-arm/altp2m.h | 10 +
 xen/include/asm-arm/domain.h |  3 ++
 4 files changed, 139 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 9c06055a94..43e95c5681 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -20,6 +20,103 @@
 #include 
 #include 
 
+struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
+{
+unsigned int idx = v->arch.ap2m_idx;
+
+if ( idx == INVALID_ALTP2M )
+return NULL;
+
+BUG_ON(idx >= MAX_ALTP2M);
+
+return v->domain->arch.altp2m_p2m[idx];
+}
+
+static void altp2m_vcpu_reset(struct vcpu *v)
+{
+v->arch.ap2m_idx = INVALID_ALTP2M;
+}
+
+void altp2m_vcpu_initialize(struct vcpu *v)
+{
+/*
+ * ARM supports an external-only interface to the altp2m subsystem, i.e.,
+ * the guest does not have access to the altp2m subsystem. Thus, we can
+ * simply pause the vcpu, as there is no scenario in which we initialize
+ * altp2m on the current vcpu. That is, the vcpu must be paused every time
+ * we initialize altp2m.
+ */
+vcpu_pause(v);
+
+v->arch.ap2m_idx = 0;
+atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+
+vcpu_unpause(v);
+}
+
+void altp2m_vcpu_destroy(struct vcpu *v)
+{
+struct p2m_domain *p2m;
+
+if ( v != current )
+vcpu_pause(v);
+
+if ( (p2m = altp2m_get_altp2m(v)) )
+atomic_dec(&p2m->active_vcpus);
+
+altp2m_vcpu_reset(v);
+
+if ( v != current )
+vcpu_unpause(v);
+}
+
+static int altp2m_init_helper(struct domain *d, unsigned int idx)
+{
+int rc;
+struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];
+
+ASSERT(p2m == NULL);
+
+/* Allocate a new, zeroed altp2m view. */
+p2m = xzalloc(struct p2m_domain);
+if ( p2m == NULL )
+return -ENOMEM;
+
+p2m->p2m_class = p2m_alternate;
+
+/* Initialize the new altp2m view. */
+rc = p2m_init_one(d, p2m);
+if ( rc )
+goto err;
+
+d->arch.altp2m_p2m[idx] = p2m;
+
+return rc;
+
+err:
+xfree(p2m);
+d->arch.altp2m_p2m[idx] = NULL;
+
+return rc;
+}
+
+int altp2m_init_by_id(struct domain *d, unsigned int idx)
+{
+int rc = -EINVAL;
+
+if ( idx >= MAX_ALTP2M )
+   

[Xen-devel] [PATCH v4 02/39] arm/p2m: Add first altp2m HVMOP stubs

2017-08-30 Thread Sergej Proskurin
This commit copies and extends the altp2m-related code from x86 to ARM.
Functions that are not yet supported notify the caller or print a BUG
message stating their absence.

Currently, we prohibit concurrent access to the altp2m interface by
locking the entire domain. As stated in the provided TODO statement,
future implementations should determine which HVMOPs can be executed
concurrently.

Also, the struct arch_domain is extended with the altp2m_active
attribute, representing the current altp2m activity configuration of the
domain.
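The sanity checks do_altp2m_op performs before dispatching (reject non-zero padding, a wrong interface version, an out-of-range command, and, for most commands, an inactive altp2m) can be modelled in isolation. The constants and names below are illustrative, not the Xen ABI:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy command set: only state get/set are allowed while altp2m is off. */
enum { OP_GET_STATE, OP_SET_STATE, OP_SWITCH, OP_LAST = OP_SWITCH };
#define IFACE_VERSION 1

struct toy_op {
    uint8_t  pad;      /* must be zero */
    uint32_t version;  /* must match the interface version */
    uint32_t cmd;
};

/* Mirror of the argument validation done before any HVMOP work runs. */
int check_altp2m_op(const struct toy_op *a, bool altp2m_enabled)
{
    if ( a->pad || a->version != IFACE_VERSION || a->cmd > OP_LAST )
        return -22;                           /* -EINVAL */
    /* Everything except get/set of the domain state needs altp2m active. */
    if ( a->cmd != OP_GET_STATE && a->cmd != OP_SET_STATE && !altp2m_enabled )
        return -95;                           /* -EOPNOTSUPP */
    return 0;
}
```

In the real patch these checks run under domain_lock(d), so at most one altp2m HVMOP executes per domain at a time.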

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
Removed not used altp2m helper stubs in altp2m.h.

v3: Cosmetic fixes.

Added domain lock in "do_altp2m_op" to avoid concurrent execution of
altp2m-related HVMOPs.

Added check making sure that HVM_PARAM_ALTP2M is set before
execution of altp2m-related HVMOPs.

v4: Cosmetic fixes.

Added a TODO proposing to determine, which HVMOPs can be executed
concurrently instead of locking the entire domain and hence
prohibiting concurrent access of the altp2m interface.

Adjust to the current code base by explicitly checking whether
altp2m is disabled.

Change the type bool_t to bool of the field altp2m_active in struct
arch_domain.
---
 xen/arch/arm/hvm.c   | 97 
 xen/include/asm-arm/altp2m.h |  4 +-
 xen/include/asm-arm/domain.h |  3 ++
 3 files changed, 102 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index a56b3fe3fb..042bdda979 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -31,6 +31,99 @@
 
 #include 
 
+#include 
+
+static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+struct xen_hvm_altp2m_op a;
+struct domain *d = NULL;
+uint64_t mode;
+int rc = 0;
+
+if ( copy_from_guest(&a, arg, 1) )
+return -EFAULT;
+
+if ( a.pad1 || a.pad2 ||
+ (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
+ (a.cmd < HVMOP_altp2m_get_domain_state) ||
+ (a.cmd > HVMOP_altp2m_change_gfn) )
+return -EINVAL;
+
+d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
+
+if ( d == NULL )
+return -ESRCH;
+
+/*
+ * TODO: We prohibit concurrent access of the altp2m interface by locking
+ * the entire domain. Determine which HVMOPs can be executed concurrently.
+ */
+
+/* Prevent concurrent execution of the following HVMOPs. */
+domain_lock(d);
+
+if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
+ (a.cmd != HVMOP_altp2m_set_domain_state) &&
+ !altp2m_active(d) )
+{
+rc = -EOPNOTSUPP;
+goto out;
+}
+
+mode = d->arch.hvm_domain.params[HVM_PARAM_ALTP2M];
+
+if ( XEN_ALTP2M_disabled == mode )
+{
+rc = -EINVAL;
+goto out;
+}
+
+if ( (rc = xsm_hvm_altp2mhvm_op(XSM_OTHER, d, mode, a.cmd)) )
+goto out;
+
+switch ( a.cmd )
+{
+case HVMOP_altp2m_get_domain_state:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_set_domain_state:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_vcpu_enable_notify:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_create_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_destroy_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_switch_p2m:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_set_mem_access:
+rc = -EOPNOTSUPP;
+break;
+
+case HVMOP_altp2m_change_gfn:
+rc = -EOPNOTSUPP;
+break;
+}
+
+out:
+domain_unlock(d);
+rcu_unlock_domain(d);
+
+return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
@@ -79,6 +172,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 rc = -EINVAL;
 break;
 
+case HVMOP_altp2m:
+rc = do_altp2m_op(arg);
+break;
+
 default:
 {
 gdprintk(XENLOG_DEBUG, "HVMOP op=%lu: not implemented\n", op);
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a87747a291..0711796123 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -2,6 +2,7 @@
  * Alternate p2m
  *
  * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2016, Sergej Proskurin <prosku...@sec.in.tum.de>.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -24,8 +25,7 @@
 /* Alternate p2m on/off per domain */
 static inline bool_t altp2m_active(const 

[Xen-devel] [PATCH v4 18/39] arm/p2m: Add p2m_get_active_p2m macro

2017-08-30 Thread Sergej Proskurin
This commit introduces the macro "p2m_get_active_p2m" returning the
currently active (alt)p2m. The need for this macro will be shown in the
following commits.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: Moved the introduced macro from ./xen/arch/arm/p2m.c to
./xen/include/asm-arm/p2m.h as it will be used in multiple files in the
following commits.
---
 xen/include/asm-arm/p2m.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e8a2116081..d3467daacf 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -14,6 +14,9 @@
 
 #define paddr_bits PADDR_BITS
 
+#define p2m_get_active_p2m(v) unlikely(altp2m_active(v->domain)) ?  \
+  altp2m_get_altp2m(v) : p2m_get_hostp2m(v->domain);
+
 #define p2m_switch_vttbr_and_get_flags(ovttbr, nvttbr, flags)   \
 ({  \
  ovttbr = READ_SYSREG64(VTTBR_EL2); \
-- 
2.13.3




[Xen-devel] [PATCH v4 21/39] arm/p2m: Cosmetic fix - __p2m_get_mem_access

2017-08-30 Thread Sergej Proskurin
In this commit, we extend the function prototype of "__p2m_get_mem_access" to
take an argument of type "struct p2m_domain *", as we need to distinguish
between the host's p2m and different altp2m views.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Changed the parameter of "p2m_mem_access_check_and_get_page"
from "struct p2m_domain*" to "struct vcpu*".

v4: We don't need to adjust the function "p2m_mem_access_check_and_get_page"
any more, as its change is already part of another patch.
---
 xen/arch/arm/mem_access.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index 3e2bb4088a..5bc28db8ff 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -24,10 +24,9 @@
 #include 
 #include 
 
-static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
+static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
 xenmem_access_t *access)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 void *i;
 unsigned int index;
 
@@ -148,7 +147,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
  * We do this first as this is faster in the default case when no
  * permission is set on the page.
  */
-rc = __p2m_get_mem_access(v->domain, gfn, &xma);
+rc = __p2m_get_mem_access(p2m, gfn, &xma);
 if ( rc < 0 )
 goto err;
 
@@ -443,7 +442,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
 struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
 p2m_read_lock(p2m);
-ret = __p2m_get_mem_access(d, gfn, access);
+ret = __p2m_get_mem_access(p2m, gfn, access);
 p2m_read_unlock(p2m);
 
 return ret;
-- 
2.13.3




[Xen-devel] [PATCH v4 38/39] arm/xen-access: Extend xen-access for altp2m on ARM

2017-08-30 Thread Sergej Proskurin
Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Razvan Cojocaru <rcojoc...@bitdefender.com>
---
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei Liu <wei.l...@citrix.com>
---
 tools/tests/xen-access/xen-access.c | 33 -
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c 
b/tools/tests/xen-access/xen-access.c
index 1e69e25a16..481337cacd 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -362,10 +362,11 @@ void usage(char* progname)
 {
 fprintf(stderr, "Usage: %s [-m]  write|exec", progname);
 #if defined(__i386__) || defined(__x86_64__)
-fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec|debug|cpuid|desc_access|write_ctrlreg_cr4");
+fprintf(stderr, "|breakpoint|debug|cpuid|desc_access|write_ctrlreg_cr4");
 #elif defined(__arm__) || defined(__aarch64__)
 fprintf(stderr, "|privcall");
 #endif
+fprintf(stderr, "|altp2m_write|altp2m_exec");
 fprintf(stderr,
 "\n"
 "Logs first page writes, execs, or breakpoint traps that occur on the domain.\n"
@@ -441,18 +442,6 @@ int main(int argc, char *argv[])
 {
 breakpoint = 1;
 }
-else if ( !strcmp(argv[0], "altp2m_write") )
-{
-default_access = XENMEM_access_rx;
-altp2m = 1;
-memaccess = 1;
-}
-else if ( !strcmp(argv[0], "altp2m_exec") )
-{
-default_access = XENMEM_access_rw;
-altp2m = 1;
-memaccess = 1;
-}
 else if ( !strcmp(argv[0], "debug") )
 {
 debug = 1;
@@ -475,6 +464,18 @@ int main(int argc, char *argv[])
 privcall = 1;
 }
 #endif
+else if ( !strcmp(argv[0], "altp2m_write") )
+{
+default_access = XENMEM_access_rx;
+altp2m = 1;
+memaccess = 1;
+}
+else if ( !strcmp(argv[0], "altp2m_exec") )
+{
+default_access = XENMEM_access_rw;
+altp2m = 1;
+memaccess = 1;
+}
 else
 {
 usage(argv[0]);
@@ -547,12 +548,14 @@ int main(int argc, char *argv[])
 goto exit;
 }
 
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep( xch, domain_id, 1 );
 if ( rc < 0 )
 {
 ERROR("Error %d failed to enable singlestep monitoring!\n", rc);
 goto exit;
 }
+#endif
 }
 
 if ( memaccess && !altp2m )
@@ -663,7 +666,9 @@ int main(int argc, char *argv[])
 rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
 rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
 rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
+#if defined(__i386__) || defined(__x86_64__)
 rc = xc_monitor_singlestep(xch, domain_id, 0);
+#endif
 } else {
 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
@@ -883,9 +888,11 @@ int main(int argc, char *argv[])
 exit:
 if ( altp2m )
 {
+#if defined(__i386__) || defined(__x86_64__)
 uint32_t vcpu_id;
 for ( vcpu_id = 0; vcpu_id<XEN_LEGACY_MAX_VCPUS; vcpu_id++)
 rc = control_singlestep(xch, domain_id, vcpu_id, 0);
+#endif
 }
 
 /* Tear down domain xenaccess */
-- 
2.13.3




[Xen-devel] [PATCH v4 09/39] arm/p2m: Rename parameter in p2m_alloc_vmid

2017-08-30 Thread Sergej Proskurin
This commit does not change or introduce any additional functionality;
rather, it prepares for the following commit, which alters the
functionality of the function "p2m_alloc_vmid".

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/arch/arm/p2m.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 65dd2772bf..808d99e1e9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1166,24 +1166,24 @@ static int p2m_alloc_vmid(struct domain *d)
 {
 struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-int rc, nr;
+int rc, vmid;
 
 spin_lock(&vmid_alloc_lock);
 
-nr = find_first_zero_bit(vmid_mask, MAX_VMID);
+vmid = find_first_zero_bit(vmid_mask, MAX_VMID);
 
-ASSERT(nr != INVALID_VMID);
+ASSERT(vmid != INVALID_VMID);
 
-if ( nr == MAX_VMID )
+if ( vmid == MAX_VMID )
 {
 rc = -EBUSY;
 printk(XENLOG_ERR "p2m.c: dom%d: VMID pool exhausted\n", d->domain_id);
 goto out;
 }
 
-set_bit(nr, vmid_mask);
+set_bit(vmid, vmid_mask);
 
-p2m->vmid = nr;
+p2m->vmid = vmid;
 
 rc = 0;
 
-- 
2.13.3




[Xen-devel] [PATCH v4 34/39] arm/p2m: Add HVMOP_altp2m_change_gfn

2017-08-30 Thread Sergej Proskurin
This commit adds the functionality to change mfn mappings for specified
gfns in altp2m views. This mechanism can be used in the context of
VMI, e.g., to establish stealthy debugging.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Moved the altp2m_lock to guard access to d->arch.altp2m_vttbr[idx]
in altp2m_change_gfn.

Locked hp2m to prevent hp2m entries from being modified while the
function "altp2m_change_gfn" is active.

Removed setting ap2m->mem_access_enabled in "altp2m_change_gfn", as
we do not need explicitly splitting pages at this point.

Extended the checks to allow changing gfns in p2m_ram_(rw|ro) memory
only.

Moved the function "remove_altp2m_entry" out of this commit.

v4: Cosmetic fixes.

Moved the initialization of the ap2m pointer after having checked
that the altp2m index and the associated altp2m view are valid.

Use the functions "p2m_(set|get)_entry" instead of the helpers
"p2m_lookup_attr", "remove_altp2m_entry", and "modify_altp2m_entry".

Removed the call to altp2m_lock in "altp2m_change_gfn" as it is
sufficient to read lock the host's p2m and write lock the indexed
altp2m.

We make sure that we do not remove a superpage by mistake if the
user requests a specific gfn.

Removed memaccess-related comment as (i) memaccess is handled by
"p2m_set_entry" and (ii) we map always only one page and
"p2m_set_entry" can handle splitting superpages if required.
---
 xen/arch/arm/altp2m.c| 81 
 xen/arch/arm/hvm.c   |  7 +++-
 xen/include/asm-arm/altp2m.h |  6 
 3 files changed, 93 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index fd455bdbfc..37820e7b2a 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -305,6 +305,87 @@ out:
 return rc;
 }
 
+int altp2m_change_gfn(struct domain *d,
+  unsigned int idx,
+  gfn_t old_gfn,
+  gfn_t new_gfn)
+{
+struct p2m_domain *hp2m, *ap2m;
+mfn_t mfn;
+p2m_access_t p2ma;
+p2m_type_t p2mt;
+unsigned int page_order;
+int rc = -EINVAL;
+
+hp2m = p2m_get_hostp2m(d);
+
+if ( idx >= MAX_ALTP2M || d->arch.altp2m_p2m[idx] == NULL )
+return rc;
+
+ap2m = d->arch.altp2m_p2m[idx];
+
+p2m_read_lock(hp2m);
+p2m_write_lock(ap2m);
+
+mfn = p2m_get_entry(ap2m, old_gfn, &p2mt, NULL, NULL);
+
+/* Check whether the page needs to be reset. */
+if ( gfn_eq(new_gfn, INVALID_GFN) )
+{
+/* If mfn is mapped by old_gfn, remove old_gfn from the altp2m table. */
+if ( !mfn_eq(mfn, INVALID_MFN) )
+rc = p2m_set_entry(ap2m, old_gfn, (1UL << THIRD_ORDER), INVALID_MFN,
+   p2m_invalid, p2m_access_rwx);
+
+goto out;
+}
+
+/* Check hostp2m if no valid entry in altp2m present. */
+if ( mfn_eq(mfn, INVALID_MFN) )
+{
+mfn = p2m_get_entry(hp2m, old_gfn, &p2mt, &p2ma, &page_order);
+
+if ( mfn_eq(mfn, INVALID_MFN) ||
+ /* Allow changing gfns in p2m_ram_(rw|ro) memory only. */
+ ((p2mt != p2m_ram_rw) && (p2mt != p2m_ram_ro)) )
+goto out;
+
+/* If this is a superpage, copy that first. */
+if ( page_order != THIRD_ORDER )
+{
+/* Align the old_gfn and mfn to the given page order. */
+old_gfn = _gfn(gfn_x(old_gfn) & ~((1UL << page_order) - 1));
+mfn = _mfn(mfn_x(mfn) & ~((1UL << page_order) - 1));
+
+if ( p2m_set_entry(ap2m, old_gfn, (1UL << page_order), mfn, p2mt, p2ma) )
+goto out;
+}
+}
+
+mfn = p2m_get_entry(ap2m, new_gfn, &p2mt, &p2ma, NULL);
+
+/* If new_gfn is not part of altp2m, get the mapping information from hp2m */
+if ( mfn_eq(mfn, INVALID_MFN) )
+mfn = p2m_get_entry(hp2m, new_gfn, &p2mt, &p2ma, NULL);
+
+if ( mfn_eq(mfn, INVALID_MFN) ||
+ /* Allow changing gfns in p2m_ram_(rw|ro) memory only. */
+ ((p2mt != p2m_ram_rw) && (p2mt != p2m_ram_ro)) )
+goto out;
+
+if ( p2m_set_entry(ap2m, old_gfn, (1UL << THIRD_ORDER), mfn, p2mt, p2ma) )
+goto out;
+
+rc = 0;
+
+out:
+p2m_write_unlock(ap2m);
+p2m_read_unlock(hp2m);
+
+return rc;
+}
+
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
 v->arch.ap2m_idx = INVALID_ALTP2M;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 7e91f2436d..8cf6db24a6 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -148,7 +148,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
 break;
 
 case HVMOP_altp2m_change_

[Xen-devel] [PATCH v4 24/39] arm/p2m: Make p2m_put_l3_page ready for altp2m

2017-08-30 Thread Sergej Proskurin
This commit extends the prototype of the function "p2m_put_l3_page" with
an additional parameter of type "struct p2m_domain *". This is needed as
a future commit will extend the function "p2m_put_l3_page" so that
"put_page" is called only if the p2m being modified is the hostp2m.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/arch/arm/p2m.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c5bf64aee0..246250d8c6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -606,7 +606,7 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
  * TODO: Handle superpages, for now we only take special references for leaf
  * pages (specifically foreign ones, which can't be super mapped today).
  */
-static void p2m_put_l3_page(const lpae_t pte)
+static void p2m_put_l3_page(struct p2m_domain *p2m, const lpae_t pte)
 {
 ASSERT(lpae_valid(pte));
 
@@ -649,7 +649,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
 if ( level == 3 )
 {
 p2m->stats.mappings[level]--;
-p2m_put_l3_page(entry);
+p2m_put_l3_page(p2m, entry);
 return;
 }
 
-- 
2.13.3




[Xen-devel] [PATCH v4 37/39] altp2m: Allow activating altp2m on ARM domains

2017-08-30 Thread Sergej Proskurin
The previous libxl implementation limited the use of altp2m to x86 HVM domains.
This commit extends libxl by introducing the altp2m switch to ARM domains.

Additionally, we introduce the macro LIBXL_HAVE_ARM_ALTP2M in parallel to the
former LIBXL_HAVE_ALTP2M to differentiate between altp2m for x86 and altp2m
for ARM architectures. We also extend the documentation of the option "altp2m"
in ./docs/man/xl.cfg.pod.5.in.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei Liu <wei.l...@citrix.com>
---
 tools/libxl/libxl.h | 10 +-
 tools/libxl/libxl_dom.c | 16 ++--
 tools/libxl/libxl_types.idl |  2 +-
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 17045253ab..e7af15bc45 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -872,11 +872,19 @@ typedef struct libxl__ctx libxl_ctx;
 
 /*
  * LIBXL_HAVE_ALTP2M
- * If this is defined, then libxl supports alternate p2m functionality.
+ * If this is defined, then libxl supports alternate p2m functionality for
+ * x86 HVM guests.
  */
 #define LIBXL_HAVE_ALTP2M 1
 
 /*
+ * LIBXL_HAVE_ARM_ALTP2M
+ * If this is defined, then libxl supports alternate p2m functionality for
+ * ARM guests.
+ */
+#define LIBXL_HAVE_ARM_ALTP2M 1
+
+/*
  * LIBXL_HAVE_REMUS
  * If this is defined, then libxl supports remus.
  */
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index f54fd49a73..db77c95a7e 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -314,6 +314,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
 libxl_domain_build_info *const info = _config->b_info;
 libxl_ctx *ctx = libxl__gc_owner(gc);
 char *xs_domid, *con_domid;
+bool altp2m_support = false;
 int rc;
 uint64_t size;
 
@@ -458,18 +459,29 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
 #endif
 }
 
+#if defined(__i386__) || defined(__x86_64__)
 /* Alternate p2m support on x86 is available only for HVM guests. */
-if (info->type == LIBXL_DOMAIN_TYPE_HVM) {
+if (info->type == LIBXL_DOMAIN_TYPE_HVM)
+altp2m_support = true;
+#elif defined(__arm__) || defined(__aarch64__)
+/* Alternate p2m support on ARM is available for all guests. */
+altp2m_support = true;
+#endif
+
+if (altp2m_support) {
 /* The config parameter "altp2m" replaces the parameter "altp2mhvm". For
- * legacy reasons, both parameters are accepted on x86 HVM guests.
+ * legacy reasons, both parameters are accepted on x86 HVM guests (only
+ * "altp2m" is accepted on ARM guests).
  *
  * If the legacy field info->u.hvm.altp2m is set, activate altp2m.
  * Otherwise set altp2m based on the field info->altp2m. */
+#if defined(__i386__) || defined(__x86_64__)
 if (info->altp2m == LIBXL_ALTP2M_MODE_DISABLED &&
 libxl_defbool_val(info->u.hvm.altp2m))
 xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_ALTP2M,
  libxl_defbool_val(info->u.hvm.altp2m));
 else
+#endif
 xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_ALTP2M,
  info->altp2m);
 }
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 6e80d36256..412a0b6129 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -583,7 +583,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
   ])),
 # Alternate p2m is not bound to any architecture or guest type, as it is
-# supported by x86 HVM and ARM support is planned.
+# supported by x86 HVM and ARM domains.
 ("altp2m", libxl_altp2m_mode),
 
 ], dir=DIR_IN
-- 
2.13.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v4 36/39] altp2m: Document external-only use on ARM

2017-08-30 Thread Sergej Proskurin
From: Tamas K Lengyel <tamas.leng...@zentific.com>

Currently, the altp2m feature has been used and thus documented for the
x86 architecture. As we aim to introduce altp2m to ARM, in this commit,
we adjust the documentation by pointing out x86 only parts and thus make
clear that the modes XEN_ALTP2M_external and XEN_ALTP2M_disabled are
also valid for the ARM architecture.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Signed-off-by: Tamas K Lengyel <tamas.leng...@zentific.com>
---
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei Liu <wei.l...@citrix.com>
---
v4: We added this patch to our patch series.
---
 docs/man/xl.cfg.pod.5.in | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index 79cb2eaea7..259cf18ea6 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -1380,7 +1380,7 @@ guest Operating Systems.
 
 =item B

[Xen-devel] [PATCH v4 03/39] arm/p2m: Add hvm_allow_(set|get)_param

2017-08-30 Thread Sergej Proskurin
This commit introduces the functions hvm_allow_(set|get)_param. These
can be used as a filter controlling access to HVM params. This
functionality has been inspired by the x86 implementation.

The introduced filter ensures that the HVM param HVM_PARAM_ALTP2M is set
once and not altered by guest domains.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/arch/arm/hvm.c | 65 ++
 1 file changed, 56 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 042bdda979..6f5f9b41ac 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -124,6 +124,48 @@ out:
 return rc;
 }
 
+static int hvm_allow_set_param(struct domain *d, const struct xen_hvm_param *a)
+{
+uint64_t value = d->arch.hvm_domain.params[a->index];
+int rc;
+
+rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
+if ( rc )
+return rc;
+
+switch ( a->index )
+{
+/* The following parameters should only be changed once. */
+case HVM_PARAM_ALTP2M:
+if ( value != 0 && a->value != value )
+rc = -EEXIST;
+break;
+default:
+break;
+}
+
+return rc;
+}
+
+static int hvm_allow_get_param(struct domain *d, const struct xen_hvm_param *a)
+{
+int rc;
+
+rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_param);
+if ( rc )
+return rc;
+
+switch ( a->index )
+{
+/* This switch statement can be used to control/limit guest access to
+ * certain HVM params. */
+default:
+break;
+}
+
+return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
 long rc = 0;
@@ -146,21 +188,26 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 if ( d == NULL )
 return -ESRCH;
 
-rc = xsm_hvm_param(XSM_TARGET, d, op);
-if ( rc )
-goto param_fail;
-
-if ( op == HVMOP_set_param )
+switch ( op )
 {
+case HVMOP_set_param:
+rc = hvm_allow_set_param(d, &a);
+if ( rc )
+break;
+
 d->arch.hvm_domain.params[a.index] = a.value;
-}
-else
-{
+break;
+
+case HVMOP_get_param:
+rc = hvm_allow_get_param(d, &a);
+if ( rc )
+break;
+
 a.value = d->arch.hvm_domain.params[a.index];
 rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
+break;
 }
 
-param_fail:
 rcu_unlock_domain(d);
 break;
 }
-- 
2.13.3




[Xen-devel] [PATCH v4 05/39] arm/p2m: Introduce p2m_is_(hostp2m|altp2m)

2017-08-30 Thread Sergej Proskurin
This commit adds a p2m class to the struct p2m_domain to distinguish
between the host's original p2m and alternate p2m's. The need for this
functionality will be shown in the following commits.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: Change return type of p2m_is_(hostp2m|altp2m) from bool_t to bool.
---
 xen/include/asm-arm/p2m.h | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 500dc88fbc..332d74f11c 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -40,6 +40,11 @@ struct domain;
 
 extern void memory_type_changed(struct domain *);
 
+typedef enum {
+p2m_host,
+p2m_alternate,
+} p2m_class_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
 /*
@@ -126,6 +131,9 @@ struct p2m_domain {
 
 /* Keeping track on which CPU this p2m was used and for which vCPU */
 uint8_t last_vcpu_ran[NR_CPUS];
+
+/* Choose between: host/alternate. */
+p2m_class_t p2m_class;
 };
 
 /*
@@ -359,6 +367,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_host;
+}
+
+static inline bool p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+return p2m->p2m_class == p2m_alternate;
+}
+
 static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
 {
 return 1;
-- 
2.13.3




[Xen-devel] [PATCH v4 00/39] arm/altp2m: Introducing altp2m to ARM

2017-08-30 Thread Sergej Proskurin
Hi all,

The following patch series can be found on Github[0] and is part of my
contribution to last year's Google Summer of Code (GSoC)[1]. My project is
managed by the organization The Honeynet Project. As part of GSoC, I was
supervised by the Xen maintainer Tamas K. Lengyel <ta...@tklengyel.com>, George
D. Webster, and Steven Maresca.

In this patch series, we provide an implementation of the altp2m subsystem for
ARM. Our implementation is based on the altp2m subsystem for x86, providing
additional --alternate-- views on the guest's physical memory by means of the
ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM and
modify xen-access to test the suggested functionality.

To be more precise, altp2m allows creating and switching to additional p2m views
(i.e. gfn to mfn mappings). These views can be manipulated and activated at
will through the provided HVMOPs. In this way, the active guest instance in
question can seamlessly continue execution without noticing that anything has
changed. The prime scope of application of altp2m is Virtual Machine
Introspection, where guest systems are analyzed from outside the VM.

Altp2m can be activated by means of the guest control parameter "altp2m" on x86
and ARM architectures. For use-cases requiring purely external access to
altp2m, this patch series allows specifying whether the altp2m interface should
be external-only.

This version is a revised version of v3 that has been submitted in 2016. It
incorporates the comments of the previous patch series. Although the previous
version has been submitted last year, I have kept the comments of the
individual patches. Both the purpose and changes from v3 to v4 are stated
inside the individual commits.

Best regards,
~Sergej

[0] https://github.com/sergej-proskurin/xen (branch arm-altp2m-v4)
[1] https://summerofcode.withgoogle.com/projects/#4970052843470848

Sergej Proskurin (38):
  arm/p2m: Introduce p2m_(switch|restore)_vttbr_and_(g|s)et_flags
  arm/p2m: Add first altp2m HVMOP stubs
  arm/p2m: Add hvm_allow_(set|get)_param
  arm/p2m: Add HVMOP_altp2m_get_domain_state
  arm/p2m: Introduce p2m_is_(hostp2m|altp2m)
  arm/p2m: Cosmetic fix - substitute _gfn(ULONG_MAX) for INVALID_GFN
  arm/p2m: Move hostp2m init/teardown to individual functions
  arm/p2m: Cosmetic fix - function prototype of p2m_alloc_table
  arm/p2m: Rename parameter in p2m_alloc_vmid
  arm/p2m: Change func prototype and impl of p2m_(alloc|free)_vmid
  altp2m: Move (MAX|INVALID)_ALTP2M to xen/p2m-common.h
  arm/p2m: Add altp2m init/teardown routines
  arm/p2m: Add altp2m table flushing routine
  arm/p2m: Add HVMOP_altp2m_set_domain_state
  arm/p2m: Add HVMOP_altp2m_create_p2m
  arm/p2m: Add HVMOP_altp2m_destroy_p2m
  arm/p2m: Add HVMOP_altp2m_switch_p2m
  arm/p2m: Add p2m_get_active_p2m macro
  arm/p2m: Make p2m_restore_state ready for altp2m
  arm/p2m: Make get_page_from_gva ready for altp2m
  arm/p2m: Cosmetic fix - __p2m_get_mem_access
  arm/p2m: Make p2m_mem_access_check ready for altp2m
  arm/p2m: Cosmetic fix - function prototypes
  arm/p2m: Make p2m_put_l3_page ready for altp2m
  arm/p2m: Modify reference count only if hostp2m active
  arm/p2m: Add HVMOP_altp2m_set_mem_access
  arm/p2m: Add altp2m_propagate_change
  altp2m: Rename p2m_altp2m_check to altp2m_check
  x86/altp2m: Move altp2m_check to altp2m.c
  arm/altp2m: Move altp2m_check to altp2m.h
  arm/altp2m: Introduce altp2m_switch_vcpu_altp2m_by_id
  arm/altp2m: Make altp2m_vcpu_idx ready for altp2m
  arm/p2m: Add altp2m paging mechanism
  arm/p2m: Add HVMOP_altp2m_change_gfn
  arm/p2m: Adjust debug information to altp2m
  altp2m: Allow activating altp2m on ARM domains
  arm/xen-access: Extend xen-access for altp2m on ARM
  arm/xen-access: Add test of xc_altp2m_change_gfn

Tamas K Lengyel (1):
  altp2m: Document external-only use on ARM

 docs/man/xl.cfg.pod.5.in|   8 +-
 tools/libxl/libxl.h |  10 +-
 tools/libxl/libxl_dom.c |  16 +-
 tools/libxl/libxl_types.idl |   2 +-
 tools/tests/xen-access/Makefile |   2 +-
 tools/tests/xen-access/xen-access.c | 213 -
 xen/arch/arm/Makefile   |   1 +
 xen/arch/arm/altp2m.c   | 601 
 xen/arch/arm/hvm.c  | 202 +++-
 xen/arch/arm/mem_access.c   | 112 +--
 xen/arch/arm/p2m.c  | 219 +
 xen/arch/arm/traps.c|  17 +
 xen/arch/x86/mm/altp2m.c|   6 +
 xen/arch/x86/mm/p2m.c   |   6 -
 xen/common/vm_event.c   |   3 +-
 xen/include/asm-arm/altp2m.h|  73 -
 xen/include/asm-arm/domain.h|  15 +
 xen/include/asm-arm/p2m.h   |  62 +++-
 xen/include/asm-x86/altp2m.h|   3 +
 xen/include/asm-x86/domain.h|   3 +-
 xen/include/asm-x86/p2m.h   |   3 -
 xen/include/xen/altp2m-common.h

[Xen-devel] [PATCH v4 33/39] arm/p2m: Add altp2m paging mechanism

2017-08-30 Thread Sergej Proskurin
This commit adds the function "altp2m_lazy_copy" implementing the altp2m
paging mechanism. The function "altp2m_lazy_copy" lazily copies the
hostp2m's mapping into the currently active altp2m view on 2nd stage
translation faults on instruction or data access.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Cosmetic fixes.

Locked hostp2m in the function "altp2m_lazy_copy" to avoid a mapping
being changed in hostp2m before it has been inserted into the
altp2m view.

Removed unnecessary calls to "p2m_mem_access_check" in the functions
"do_trap_instr_abort_guest" and "do_trap_data_abort_guest" after a
translation fault has been handled by the function
"altp2m_lazy_copy".

Adapted "altp2m_lazy_copy" to return the value "true" if the
translation fault hits a valid entry inside the currently active
altp2m view. If multiple vcpus are using the same altp2m, it is
likely that both generate a translation fault, the first of which
will already have been handled by "altp2m_lazy_copy". With this
change the 2nd vcpu will retry accessing the faulting address.

Changed order of altp2m checking and MMIO emulation within the
function "do_trap_data_abort_guest".  Now, altp2m is checked and
handled only if the MMIO does not have to be emulated.

Changed the function prototype of "altp2m_lazy_copy".  This commit
removes the unnecessary struct p2m_domain* from the previous
function prototype.  Also, this commit removes the unnecessary
argument gva.  Finally, this commit changes the address of the
function parameter gpa from paddr_t to gfn_t and renames it to gfn.

Moved the altp2m handling mechanism into a separate function
"try_handle_altp2m".

Moved the functions "p2m_altp2m_check" and
"altp2m_switch_vcpu_altp2m_by_id" out of this patch.

Moved applied code movement into a separate patch.

v4: Cosmetic fixes.

Changed the function prototype of "altp2m_lazy_copy" and
"try_handle_altp2m" by removing the unused function parameter of
type "struct npfec".

Removed the function "try_handle_altp2m".

Please note that we cannot reorder the calls to "altp2m_lazy_copy"
and "gfn_to_mfn" as to deprioritize altp2m. If the call to
"gfn_to_mfn" would be performed before "altp2m_lazy_copy", the
system would likely stall if altp2m was active. This is because the
"p2m_lookup" routine in "gfn_to_mfn" considers only the host's p2m,
which will most likely return a mfn != INVALID_MFN and thus entirely
skip the call to "altp2m_lazy_copy".

Use the functions "p2m_(set|get)_entry" instead of the helpers
"p2m_lookup_attr" and "modify_altp2m_entry" in the function
"altp2m_lazy_copy". Therefore, we write-lock the altp2m view
throughout the entire function.

Moved read-locking of hp2m to the beginning of the function
"altp2m_lazy_copy".
---
 xen/arch/arm/altp2m.c| 66 
 xen/arch/arm/traps.c | 17 
 xen/include/asm-arm/altp2m.h |  4 +++
 3 files changed, 87 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 9c9876c932..fd455bdbfc 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -155,6 +155,72 @@ int altp2m_set_mem_access(struct domain *d,
 return rc;
 }
 
+/*
+ * The function altp2m_lazy_copy returns "false" on error.  The return value
+ * "true" signals that either the mapping has been successfully lazy-copied
+ * from the hostp2m to the currently active altp2m view or that the altp2m view
+ * holds already a valid mapping. The latter is the case if multiple vcpus
+ * using the same altp2m view generate a translation fault that is led back in
+ * both cases to the same mapping and the first fault has been already handled.
+ */
+bool altp2m_lazy_copy(struct vcpu *v, gfn_t gfn)
+{
+struct domain *d = v->domain;
+struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
+p2m_type_t p2mt;
+p2m_access_t p2ma;
+mfn_t mfn;
+unsigned int page_order;
+int rc;
+
+ap2m = altp2m_get_altp2m(v);
+if ( unlikely(!ap2m) )
+return false;
+
+/*
+ * Lock hp2m to prevent the hostp2m from changing a mapping before it is added
+ * to the altp2m view.
+ */
+p2m_read_lock(hp2m);
+p2m_write_lock(ap2m);
+
+/* Check if entry is part of the altp2m view. */
+mfn = p2m_get_entry(ap2m, gfn, NULL, NULL, NULL);
+
+/*
+ * If multiple vcpus are u

[Xen-devel] [PATCH v4 06/39] arm/p2m: Cosmetic fix - substitute _gfn(ULONG_MAX) for INVALID_GFN

2017-08-30 Thread Sergej Proskurin
In ./xen/arch/arm/p2m.c, we compare gfns with INVALID_GFN
throughout the code. Thus it makes sense to use the macro INVALID_GFN
instead of a hard-coded value to initialize "p2m->lowest_mapped_gfn".

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4334e3bc81..5e86368010 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1238,7 +1238,7 @@ int p2m_init(struct domain *d)
 
 p2m->domain = d;
 p2m->max_mapped_gfn = _gfn(0);
-p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
+p2m->lowest_mapped_gfn = INVALID_GFN;
 
 p2m->default_access = p2m_access_rwx;
 p2m->mem_access_enabled = false;
-- 
2.13.3




[Xen-devel] [PATCH v4 23/39] arm/p2m: Cosmetic fix - function prototypes

2017-08-30 Thread Sergej Proskurin
This commit changes the prototypes of the following functions:
- p2m_insert_mapping
- p2m_remove_mapping

These changes are required as our implementation reuses most of the
existing ARM p2m implementation to set page table attributes of the
individual altp2m views. Therefore, the existing function prototypes have
been extended to hold another argument (of type struct p2m_domain *).
This allows specifying the p2m/altp2m domain that should be processed by
the individual function -- instead of accessing the host's default p2m
domain.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Adoption of the functions "__p2m_lookup" and "__p2m_get_mem_access"
have been moved out of this commit.
---
 xen/arch/arm/p2m.c | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 20d7784708..c5bf64aee0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1012,13 +1012,12 @@ int p2m_set_entry(struct p2m_domain *p2m,
 return rc;
 }
 
-static inline int p2m_insert_mapping(struct domain *d,
+static inline int p2m_insert_mapping(struct p2m_domain *p2m,
  gfn_t start_gfn,
  unsigned long nr,
  mfn_t mfn,
  p2m_type_t t)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 int rc;
 
 p2m_write_lock(p2m);
@@ -1028,12 +1027,11 @@ static inline int p2m_insert_mapping(struct domain *d,
 return rc;
 }
 
-static inline int p2m_remove_mapping(struct domain *d,
+static inline int p2m_remove_mapping(struct p2m_domain *p2m,
  gfn_t start_gfn,
  unsigned long nr,
  mfn_t mfn)
 {
-struct p2m_domain *p2m = p2m_get_hostp2m(d);
 int rc;
 
 p2m_write_lock(p2m);
@@ -1050,7 +1048,7 @@ int map_regions_p2mt(struct domain *d,
  mfn_t mfn,
  p2m_type_t p2mt)
 {
-return p2m_insert_mapping(d, gfn, nr, mfn, p2mt);
+return p2m_insert_mapping(p2m_get_hostp2m(d), gfn, nr, mfn, p2mt);
 }
 
 int unmap_regions_p2mt(struct domain *d,
@@ -1058,7 +1056,7 @@ int unmap_regions_p2mt(struct domain *d,
unsigned long nr,
mfn_t mfn)
 {
-return p2m_remove_mapping(d, gfn, nr, mfn);
+return p2m_remove_mapping(p2m_get_hostp2m(d), gfn, nr, mfn);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -1066,7 +1064,7 @@ int map_mmio_regions(struct domain *d,
  unsigned long nr,
  mfn_t mfn)
 {
-return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
+return p2m_insert_mapping(p2m_get_hostp2m(d), start_gfn, nr, mfn, p2m_mmio_direct_dev);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -1074,7 +1072,7 @@ int unmap_mmio_regions(struct domain *d,
unsigned long nr,
mfn_t mfn)
 {
-return p2m_remove_mapping(d, start_gfn, nr, mfn);
+return p2m_remove_mapping(p2m_get_hostp2m(d), start_gfn, nr, mfn);
 }
 
 int map_dev_mmio_region(struct domain *d,
@@ -1087,7 +1085,7 @@ int map_dev_mmio_region(struct domain *d,
 if ( !(nr && iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn) + nr - 1)) )
 return 0;
 
-res = p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
+res = p2m_insert_mapping(p2m_get_hostp2m(d), gfn, nr, mfn, 
p2m_mmio_direct_c);
 if ( res < 0 )
 {
 printk(XENLOG_G_ERR "Unable to map MFNs [%#"PRI_mfn" - %#"PRI_mfn" in Dom%d\n",
@@ -1104,13 +1102,13 @@ int guest_physmap_add_entry(struct domain *d,
 unsigned long page_order,
 p2m_type_t t)
 {
-return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
+return p2m_insert_mapping(p2m_get_hostp2m(d), gfn, (1 << page_order), mfn, t);
 }
 
 int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
   unsigned int page_order)
 {
-return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
+return p2m_remove_mapping(p2m_get_hostp2m(d), gfn, (1 << page_order), mfn);
 }
 
 static int p2m_alloc_table(struct p2m_domain *p2m)
-- 
2.13.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v4 31/39] arm/altp2m: Introduce altp2m_switch_vcpu_altp2m_by_id

2017-08-30 Thread Sergej Proskurin
This commit adds the function "altp2m_switch_vcpu_altp2m_by_id" that is
executed after checking whether the vcpu should be switched to a different
altp2m within the function "altp2m_check".

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: This commit has been moved out of the commit "arm/p2m: Add altp2m
paging mechanism".

Moved the function "p2m_altp2m_check" from p2m.c to altp2m.c and
renamed it to "altp2m_check". This change required adapting the
complementary function in the x86 architecture.

v4: Moved code renaming and movement of ARM and x86 related code out of
this commit.

While parts of this commit have been Acked-by Razvan Cojocaru and
George Dunlap in v3, we have removed the Acks as the previous patch
has been distributed across multiple smaller patches and now needs
to be reviewed again.
---
 xen/arch/arm/altp2m.c| 32 
 xen/include/asm-arm/altp2m.h |  6 +-
 2 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 4883b1323b..9c9876c932 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -32,6 +32,38 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
 return v->domain->arch.altp2m_p2m[idx];
 }
 
+static bool altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+struct domain *d = v->domain;
+bool rc = false;
+
+if ( unlikely(idx >= MAX_ALTP2M) )
+return rc;
+
+altp2m_lock(d);
+
+if ( d->arch.altp2m_p2m[idx] != NULL )
+{
+if ( idx != v->arch.ap2m_idx )
+{
+atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
+v->arch.ap2m_idx = idx;
+atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+}
+rc = true;
+}
+
+altp2m_unlock(d);
+
+return rc;
+}
+
+void altp2m_check(struct vcpu *v, uint16_t idx)
+{
+if ( altp2m_active(v->domain) )
+altp2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 {
 struct vcpu *v;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 5a2444e8f8..f9e14ab1dc 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -50,11 +50,7 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
 
 /* Check to see if vcpu should be switched to a different p2m. */
-static inline
-void altp2m_check(struct vcpu *v, uint16_t idx)
-{
-/* Not supported on ARM. */
-}
+void altp2m_check(struct vcpu *v, uint16_t idx);
 
 /* Switch alternate p2m for entire domain */
 int altp2m_switch_domain_altp2m_by_id(struct domain *d,
-- 
2.13.3




[Xen-devel] [PATCH] xen-access: Correct default value of write-to-CR4 switch

2017-08-30 Thread Sergej Proskurin
The current implementation configures the test environment to always
trap on writes to the CR4 control register, even on ARM. This leads to
issues as calling xc_monitor_write_ctrlreg on ARM with VM_EVENT_X86_CR4
will always fail.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei Liu <wei.l...@citrix.com>
---
 tools/tests/xen-access/xen-access.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 1e69e25a16..9d960e2109 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -394,7 +394,7 @@ int main(int argc, char *argv[])
 int debug = 0;
 int cpuid = 0;
 int desc_access = 0;
-int write_ctrlreg_cr4 = 1;
+int write_ctrlreg_cr4 = 0;
 uint16_t altp2m_view_id = 0;
 
 char* progname = argv[0];
-- 
2.13.3




[Xen-devel] [PATCH v9 11/13] arm/mem_access: Add long-descriptor based gpt

2017-08-16 Thread Sergej Proskurin
This commit adds functionality to walk the guest's page tables using the
long-descriptor translation table format for both ARMv7 and ARMv8.
Similar to the hardware architecture, the implementation supports
different page granularities (4K, 16K, and 64K). The implementation is
based on ARM DDI 0487B.a J1-5922, J1-5999, and ARM DDI 0406C.b B3-1510.

Note that the current implementation lacks support for Large VA/PA on
ARMv8.2 architectures (LVA/LPA, 52-bit virtual and physical address
sizes). The associated location in the code is marked appropriately.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Use TCR_SZ_MASK instead of TTBCR_SZ_MASK for ARM 32-bit guests using
the long-descriptor translation table format.

Cosmetic fixes.

v3: Move the implementation to ./xen/arch/arm/guest_copy.c.

Remove the array strides and declare the array grainsizes as static
const instead of just const to reduce the function stack overhead.

Move parts of the function guest_walk_ld into the static functions
get_ttbr_and_gran_64bit and get_top_bit to reduce complexity.

Use the macro BIT(x) instead of (1UL << x).

Add more comments && Cosmetic fixes.

v4: Move functionality responsible for determining the configured IPA
output-size into a separate function get_ipa_output_size. In this
function, we remove the previously used switch statement, which was
responsible for distinguishing between different IPA output-sizes.
Instead, we retrieve the information from the introduced ipa_sizes
array.

Remove the defines GRANULE_SIZE_INDEX_* and TTBR0_VALID from
guest_walk.h. Instead, introduce the enums granule_size_index
active_ttbr directly inside of guest_walk.c so that the associated
fields don't get exported.

Adapt the function to the new parameter of type "struct vcpu *".

Remove support for 52bit IPA output-sizes entirely from this commit.

Use lpae_* helpers instead of p2m_* helpers.

Cosmetic fixes & Additional comments.

v5: Make use of the function vgic_access_guest_memory to read page table
entries in guest memory.

Invert the indices of the arrays "offsets" and "masks" and simplify
readability by using an appropriate macro for the entries.

Remove remaining CONFIG_ARM_64 #ifdefs.

Remove the use of the macros BITS_PER_WORD and BITS_PER_DOUBLE_WORD.

Use GENMASK_ULL instead of manually creating complex masks to ease
readability.

Also, create a macro CHECK_BASE_SIZE which simply reduces the code
size and simplifies readability.

Make use of the newly introduced lpae_page macro in the if-statement
to test for invalid/reserved mappings in the L3 PTE.

Cosmetic fixes and additional comments.

v6: Convert the macro CHECK_BASE_SIZE into a helper function
check_base_size. The use of the old CHECK_BASE_SIZE was confusing as
it affected the control-flow through a return as part of the macro.

Return the value -EFAULT instead of -EINVAL if access to the guest's
memory fails.

Simplify the check in the end of the table walk that ensures that
the found PTE is a page or a superpage. The new implementation
checks if the pte maps a valid page or a superpage and returns an
-EFAULT only if both conditions are not true.

Adjust the type of the array offsets to paddr_t instead of vaddr_t
to allow working with the changed *_table_offset_* helpers, which
return offsets of type paddr_t.

Make use of renamed function access_guest_memory_by_ipa instead of
vgic_access_guest_memory.

v7: Change the return type of check_base_size to bool as it returns only
two possible values and the caller is interested only in whether the
call has succeeded or not.

Use a mask for the computation of the IPA, as the lower values of
the PTE's base address do not need to be zeroed out.

Cosmetic fixes in comments.

v8: By calling access_guest_memory_by_ipa in guest_walk_(ld|sd), we rely
on the p2m->lock (rw_lock) to be recursive. To avoid bugs in the
future implementation, we add a comment in struct p2m_domain to
address this case. Thus, we make the future implementation aware of
the nested use of the lock.

v9: Remove second "to" in a comment.

Add Acked-by Julien Grall.
---
 xen/arch/arm/guest_walk.c | 398 +-
 xen/include/asm-arm/p2m.h |   8 +-
 2 files changed, 403 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index 78badc2949..d0d45ad659 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -15,7 +15,10 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include 
 

[Xen-devel] [PATCH v9 13/13] arm/mem_access: Walk the guest's pt in software

2017-08-16 Thread Sergej Proskurin
In this commit, we make use of the gpt walk functionality introduced in
the previous commits. If mem_access is active, hardware-based gva to ipa
translation might fail, as gva_to_ipa uses the guest's translation
tables, access to which might be restricted by the active VTTBR. To
side-step potential translation errors in the function
p2m_mem_access_check_and_get_page due to restricted memory (e.g. to the
guest's page tables themselves), we walk the guest's page tables in
software.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Tamas K Lengyel <ta...@tklengyel.com>
---
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Check the returned access rights after walking the guest's page tables in
the function p2m_mem_access_check_and_get_page.

v3: Adapt function names and parameters.

v4: Comment why we need to fail if the permission flags that are
requested by the caller do not satisfy the mapped page.

Cosmetic fix that simplifies the if-statement checking for the
GV2M_WRITE permission.

v5: Move comment to ease code readability.
---
 xen/arch/arm/mem_access.c | 31 ++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index e0888bbad2..3e2bb4088a 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include <asm/guest_walk.h>
 
 static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
 xenmem_access_t *access)
@@ -101,6 +102,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
   const struct vcpu *v)
 {
 long rc;
+unsigned int perms;
 paddr_t ipa;
 gfn_t gfn;
 mfn_t mfn;
@@ -110,8 +112,35 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
 struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
 
rc = gva_to_ipa(gva, &ipa, flag);
+
+/*
+ * In case mem_access is active, hardware-based gva_to_ipa translation
+ * might fail. Since gva_to_ipa uses the guest's translation tables, access
+ * to which might be restricted by the active VTTBR, we perform a gva to
+ * ipa translation in software.
+ */
 if ( rc < 0 )
-goto err;
+{
+/*
+ * The software gva to ipa translation can still fail, e.g., if the gva
+ * is not mapped.
+ */
+if ( guest_walk_tables(v, gva, &ipa, &perms) < 0 )
+goto err;
+
+/*
+ * Check permissions that are assumed by the caller. For instance in
+ * case of guestcopy, the caller assumes that the translated page can
+ * be accessed with requested permissions. If this is not the case, we
+ * should fail.
+ *
+ * Please note that we do not check for the GV2M_EXEC permission. Yet,
+ * since the hardware-based translation through gva_to_ipa does not
+ * test for execute permissions this check can be left out.
+ */
+if ( (flag & GV2M_WRITE) && !(perms & GV2M_WRITE) )
+goto err;
+}
 
 gfn = gaddr_to_gfn(ipa);
 
-- 
2.13.3




[Xen-devel] [PATCH v9 09/13] arm/guest_access: Rename vgic_access_guest_memory

2017-08-16 Thread Sergej Proskurin
This commit renames the function vgic_access_guest_memory to
access_guest_memory_by_ipa. As the function name suggests, the functions
expects an IPA as argument. All invocations of this function have been
adapted accordingly. Apart from that, we have adjusted all printk
messages for cleanup and to eliminate artefacts of the function's
previous location.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v6: We added this patch to our patch series.

v7: Renamed the function's argument ipa back to gpa.

Removed any mentioning of "vITS" in the function's printk messages
and adjusted the commit message accordingly.

v9: Added Acked-by Julien Grall.
---
 xen/arch/arm/guestcopy.c   | 10 +-
 xen/arch/arm/vgic-v3-its.c | 36 ++--
 xen/include/asm-arm/guest_access.h |  4 ++--
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 938ffe2668..4ee07fcea3 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -123,8 +123,8 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
  * Temporarily map one physical guest page and copy data to or from it.
  * The data to be copied cannot cross a page boundary.
  */
-int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
- uint32_t size, bool is_write)
+int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
+   uint32_t size, bool is_write)
 {
 struct page_info *page;
 uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
@@ -134,7 +134,7 @@ int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
 /* Do not cross a page boundary. */
 if ( size > (PAGE_SIZE - offset) )
 {
-printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
+printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
d->domain_id);
 return -EINVAL;
 }
@@ -142,7 +142,7 @@ int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
 if ( !page )
 {
-printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
+printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
d->domain_id);
 return -EINVAL;
 }
@@ -150,7 +150,7 @@ int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
 if ( !p2m_is_ram(p2mt) )
 {
 put_page(page);
-printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.",
+printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
d->domain_id);
 return -EINVAL;
 }
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 1af6820cab..72a5c70656 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -131,9 +131,9 @@ static int its_set_collection(struct virt_its *its, uint16_t collid,
 if ( collid >= its->max_collections )
 return -ENOENT;
 
-return vgic_access_guest_memory(its->d,
-addr + collid * sizeof(coll_table_entry_t),
-&vcpu_id, sizeof(vcpu_id), true);
+return access_guest_memory_by_ipa(its->d,
+  addr + collid * sizeof(coll_table_entry_t),
+  &vcpu_id, sizeof(vcpu_id), true);
 }
 
 /* Must be called with the ITS lock held. */
@@ -149,9 +149,9 @@ static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
 if ( collid >= its->max_collections )
 return NULL;
 
-ret = vgic_access_guest_memory(its->d,
-   addr + collid * sizeof(coll_table_entry_t),
-   &vcpu_id, sizeof(coll_table_entry_t), false);
+ret = access_guest_memory_by_ipa(its->d,
+ addr + collid * sizeof(coll_table_entry_t),
+ &vcpu_id, sizeof(coll_table_entry_t), false);
 if ( ret )
 return NULL;
 
@@ -171,9 +171,9 @@ static int its_set_itt_address(struct virt_its *its, uint32_t devid,
 if ( devid >= its->max_devices )
 return -ENOENT;
 
-return vgic_access_guest_memory(its->d,
-addr + devid * sizeof(dev_table_entry_t),
-&itt_entry, sizeof(itt_entry), true);
+return access_guest_memory_by_ipa(its->d,
+  addr + devid * sizeof(dev_table_entry_t),
+  &itt_entry, sizeof(itt_entry), true);

[Xen-devel] [PATCH v9 04/13] arm/mem_access: Add short-descriptor pte typedefs and macros

2017-08-16 Thread Sergej Proskurin
The current implementation does not provide appropriate types for
short-descriptor translation table entries. As such, this commit adds new
types, which simplify managing the respective translation table entries.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Add more short-descriptor related pte typedefs that will be used by
the following commits.

v4: Move short-descriptor pte typedefs out of page.h into short-desc.h.

Change the type unsigned int to bool of every bitfield in
short-descriptor related data-structures that holds only one bit.

Change the typedef names from pte_sd_* to short_desc_*.

v5: Add {L1|L2}DESC_* defines to this commit.

v6: Add Julien Grall's Acked-by.
---
 xen/include/asm-arm/short-desc.h | 130 +++
 1 file changed, 130 insertions(+)
 create mode 100644 xen/include/asm-arm/short-desc.h

diff --git a/xen/include/asm-arm/short-desc.h b/xen/include/asm-arm/short-desc.h
new file mode 100644
index 00..9652a103c4
--- /dev/null
+++ b/xen/include/asm-arm/short-desc.h
@@ -0,0 +1,130 @@
+#ifndef __ARM_SHORT_DESC_H__
+#define __ARM_SHORT_DESC_H__
+
+/*
+ * First level translation table descriptor types used by the AArch32
+ * short-descriptor translation table format.
+ */
+#define L1DESC_INVALID  (0)
+#define L1DESC_PAGE_TABLE   (1)
+#define L1DESC_SECTION  (2)
+#define L1DESC_SECTION_PXN  (3)
+
+/* Defines for section and supersection shifts. */
+#define L1DESC_SECTION_SHIFT(20)
+#define L1DESC_SUPERSECTION_SHIFT   (24)
+#define L1DESC_SUPERSECTION_EXT_BASE1_SHIFT (32)
+#define L1DESC_SUPERSECTION_EXT_BASE2_SHIFT (36)
+
+/* Second level translation table descriptor types. */
+#define L2DESC_INVALID  (0)
+
+/* Defines for small (4K) and large page (64K) shifts. */
+#define L2DESC_SMALL_PAGE_SHIFT (12)
+#define L2DESC_LARGE_PAGE_SHIFT (16)
+
+/*
+ * Comprises bits of the level 1 short-descriptor format representing
+ * a section.
+ */
+typedef struct __packed {
+bool pxn:1; /* Privileged Execute Never */
+bool sec:1; /* == 1 if section or supersection */
+bool b:1;   /* Bufferable */
+bool c:1;   /* Cacheable */
+bool xn:1;  /* Execute Never */
+unsigned int dom:4; /* Domain field */
+bool impl:1;/* Implementation defined */
+unsigned int ap:2;  /* AP[1:0] */
+unsigned int tex:3; /* TEX[2:0] */
+bool ro:1;  /* AP[2] */
+bool s:1;   /* Shareable */
+bool ng:1;  /* Non-global */
+bool supersec:1;/* Must be 0 for sections */
+bool ns:1;  /* Non-secure */
+unsigned int base:12;   /* Section base address */
+} short_desc_l1_sec_t;
+
+/*
+ * Comprises bits of the level 1 short-descriptor format representing
+ * a supersection.
+ */
+typedef struct __packed {
+bool pxn:1; /* Privileged Execute Never */
+bool sec:1; /* == 1 if section or supersection */
+bool b:1;   /* Bufferable */
+bool c:1;   /* Cacheable */
+bool xn:1;  /* Execute Never */
+unsigned int extbase2:4;/* Extended base address, PA[39:36] */
+bool impl:1;/* Implementation defined */
+unsigned int ap:2;  /* AP[1:0] */
+unsigned int tex:3; /* TEX[2:0] */
+bool ro:1;  /* AP[2] */
+bool s:1;   /* Shareable */
+bool ng:1;  /* Non-global */
+bool supersec:1;/* Must be 0 for sections */
+bool ns:1;  /* Non-secure */
+unsigned int extbase1:4;/* Extended base address, PA[35:32] */
+unsigned int base:8;/* Supersection base address */
+} short_desc_l1_supersec_t;
+
+/*
+ * Comprises bits of the level 2 short-descriptor format representing
+ * a small page.
+ */
+typedef struct __packed {
+bool xn:1;  /* Execute Never */
+bool page:1;/* ==1 if small page */
+bool b:1;   /* Bufferable */
+bool c:1;   /* Cacheable */
+unsigned int ap:2;  /* AP[1:0] */
+unsigned int tex:3; /* TEX[2:0] */
+bool ro:1;  /* AP[2] */
+bool s:1;   /* Shareable */
+bool ng:1;  /* Non-global */
+unsigned int base:20;   /* Small page base address */
+} short_desc_l2_page_t;
+
+/*
+ * Comprises bits of the level 2 short-descriptor format representing
+ * a large page.
+ */
+typedef struct __pa

[Xen-devel] [PATCH v9 02/13] arm/mem_access: Add defines supporting PTs with varying page sizes

2017-08-16 Thread Sergej Proskurin
AArch64 supports pages with different (4K, 16K, and 64K) sizes.  To
enable guest page table walks for various configurations, this commit
extends the defines and helpers of the current implementation.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Eliminate redundant macro definitions by introducing generic macros.

v4: Replace existing macros with ones that generate static inline
helpers as to ease the readability of the code.

Move the introduced code into lpae.h

v5: Remove PAGE_SHIFT_* defines from lpae.h as we import them now from
the header xen/lib.h.

Remove *_guest_table_offset macros as to reduce the number of
exported macros which are only used once. Instead, use the
associated functionality directly within the
GUEST_TABLE_OFFSET_HELPERS.

Add comment in GUEST_TABLE_OFFSET_HELPERS stating that a page table
with 64K page size granularity does not have a zeroeth lookup level.

Add #undefs for GUEST_TABLE_OFFSET and GUEST_TABLE_OFFSET_HELPERS.

Remove CONFIG_ARM_64 #defines.

v6: Rename *_guest_table_offset_* helpers to *_table_offset_* as they
are sufficiently generic to be applied not only to the guest's page
table walks.

Change the type of the parameter and return value of the
*_table_offset_* helpers from vaddr_t to paddr_t to enable applying
these helpers also for other purposes such as computation of IPA
offsets in second stage translation tables.

v7: Clarify comments in the code and commit message to address AArch64
directly instead of ARMv8 in general.

Rename remaining GUEST_TABLE_* macros into TABLE_* macros, to be
consistent with *_table_offset_* helpers.

Added Reviewed-by Julien Grall.
---
 xen/include/asm-arm/lpae.h | 61 ++
 1 file changed, 61 insertions(+)

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index a62b118630..efec493313 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -3,6 +3,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <xen/lib.h>
+
 /*
  * WARNING!  Unlike the x86 pagetable code, where l1 is the lowest level and
  * l4 is the root of the trie, the ARM pagetables follow ARM's documentation:
@@ -151,6 +153,65 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
 return (level < 3) && lpae_mapping(pte);
 }
 
+/*
+ * AArch64 supports pages with different sizes (4K, 16K, and 64K). To enable
+ * page table walks for various configurations, the following helpers enable
+ * walking the translation table with varying page size granularities.
+ */
+
+#define LPAE_SHIFT_4K   (9)
+#define LPAE_SHIFT_16K  (11)
+#define LPAE_SHIFT_64K  (13)
+
+#define lpae_entries(gran)  (_AC(1,U) << LPAE_SHIFT_##gran)
+#define lpae_entry_mask(gran)   (lpae_entries(gran) - 1)
+
+#define third_shift(gran)   (PAGE_SHIFT_##gran)
+#define third_size(gran)((paddr_t)1 << third_shift(gran))
+
+#define second_shift(gran)  (third_shift(gran) + LPAE_SHIFT_##gran)
+#define second_size(gran)   ((paddr_t)1 << second_shift(gran))
+
+#define first_shift(gran)   (second_shift(gran) + LPAE_SHIFT_##gran)
+#define first_size(gran)((paddr_t)1 << first_shift(gran))
+
+/* Note that there is no zeroeth lookup level with a 64K granule size. */
+#define zeroeth_shift(gran) (first_shift(gran) + LPAE_SHIFT_##gran)
+#define zeroeth_size(gran)  ((paddr_t)1 << zeroeth_shift(gran))
+
+#define TABLE_OFFSET(offs, gran)  (offs & lpae_entry_mask(gran))
+#define TABLE_OFFSET_HELPERS(gran)  \
+static inline paddr_t third_table_offset_##gran##K(paddr_t va)  \
+{   \
+return TABLE_OFFSET((va >> third_shift(gran##K)), gran##K); \
+}   \
+\
+static inline paddr_t second_table_offset_##gran##K(paddr_t va) \
+{   \
+return TABLE_OFFSET((va >> second_shift(gran##K)), gran##K);\
+}   \
+\
+static inline paddr_t first_table_offset_##gran##K(paddr_t va)  \
+{   \
+return TABLE_OFFSET((va >> first_shift(gran##K)), gran##K); \
+}   \
+ 

[Xen-devel] [PATCH v9 01/13] arm/mem_access: Add and cleanup (TCR_|TTBCR_)* defines

2017-08-16 Thread Sergej Proskurin
This commit adds (TCR_|TTBCR_)* defines to simplify access to the
respective register contents. At the same time, we adjust the macros
TCR_T0SZ and TCR_TG0_* by using the newly introduced TCR_T0SZ_SHIFT and
TCR_TG0_SHIFT instead of the hardcoded values.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Define TCR_SZ_MASK in a way so that it can be also applied to 32-bit guests
using the long-descriptor translation table format.

Extend the previous commit by further defines allowing a simplified access
to the registers TCR_EL1 and TTBCR.

v3: Replace the hardcoded value 0 in the TCR_T0SZ macro with the newly
introduced TCR_T0SZ_SHIFT. Also, replace the hardcoded value 14 in
the TCR_TG0_* macros with the introduced TCR_TG0_SHIFT.

Comment when to apply the defines TTBCR_PD(0|1), according to ARM
DDI 0487B.a and ARM DDI 0406C.b.

Remove TCR_TB_* defines.

Comment when certain TCR_EL2 register fields can be applied.

v4: Cosmetic changes.

v5: Remove the shift by 0 of the TCR_SZ_MASK as it can be applied to
both TCR_T0SZ and TCR_T1SZ (which reside at different offsets).

Adjust commit message to make clear that we do not only add but also
cleanup some TCR_* defines.

v6: Changed the comment of TCR_SZ_MASK as we falsely referenced a
section instead of a page.

Add Julien Grall's Acked-by.
---
 xen/include/asm-arm/processor.h | 69 ++---
 1 file changed, 65 insertions(+), 4 deletions(-)

diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index ab5225fa6c..bf0e1bd014 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -94,6 +94,13 @@
 #define TTBCR_N_2KB  _AC(0x03,U)
 #define TTBCR_N_1KB  _AC(0x04,U)
 
+/*
+ * TTBCR_PD(0|1) can be applied only if LPAE is disabled, i.e., TTBCR.EAE==0
+ * (ARM DDI 0487B.a G6-5203 and ARM DDI 0406C.b B4-1722).
+ */
+#define TTBCR_PD0   (_AC(1,U)<<4)
+#define TTBCR_PD1   (_AC(1,U)<<5)
+
 /* SCTLR System Control Register. */
 /* HSCTLR is a subset of this. */
 #define SCTLR_TE(_AC(1,U)<<30)
@@ -154,7 +161,20 @@
 
 /* TCR: Stage 1 Translation Control */
 
-#define TCR_T0SZ(x) ((x)<<0)
+#define TCR_T0SZ_SHIFT  (0)
+#define TCR_T1SZ_SHIFT  (16)
+#define TCR_T0SZ(x) ((x)<<TCR_T0SZ_SHIFT)
+
+/*
+ * According to ARM DDI 0487B.a, TCR_EL1.{T0SZ,T1SZ} (AArch64, page D7-2480)
+ * comprises 6 bits and TTBCR.{T0SZ,T1SZ} (AArch32, page G6-5204) comprises 3
+ * bits following another 3 bits for RES0. Thus, the mask for both registers
+ * should be 0x3f.
+ */
+#define TCR_SZ_MASK (_AC(0x3f,UL))
+
+#define TCR_EPD0(_AC(0x1,UL)<<7)
+#define TCR_EPD1(_AC(0x1,UL)<<23)
 
 #define TCR_IRGN0_NC(_AC(0x0,UL)<<8)
 #define TCR_IRGN0_WBWA  (_AC(0x1,UL)<<8)
@@ -170,9 +190,50 @@
 #define TCR_SH0_OS  (_AC(0x2,UL)<<12)
 #define TCR_SH0_IS  (_AC(0x3,UL)<<12)
 
-#define TCR_TG0_4K  (_AC(0x0,UL)<<14)
-#define TCR_TG0_64K (_AC(0x1,UL)<<14)
-#define TCR_TG0_16K (_AC(0x2,UL)<<14)
+/* Note that the fields TCR_EL1.{TG0,TG1} are not available on AArch32. */
+#define TCR_TG0_SHIFT   (14)
+#define TCR_TG0_MASK(_AC(0x3,UL)<<TCR_TG0_SHIFT)
+#define TCR_TG0_4K  (_AC(0x0,UL)<<TCR_TG0_SHIFT)
+#define TCR_TG0_64K (_AC(0x1,UL)<<TCR_TG0_SHIFT)
+#define TCR_TG0_16K (_AC(0x2,UL)<<TCR_TG0_SHIFT)
+
+/* Note that the field TCR_EL2.TG1 exists only if HCR_EL2.E2H==1. */
+#define TCR_EL1_TG1_SHIFT   (30)
+#define TCR_EL1_TG1_MASK(_AC(0x3,UL)<<TCR_EL1_TG1_SHIFT)
+#define TCR_EL1_TG1_16K (_AC(0x1,UL)<<TCR_EL1_TG1_SHIFT)
+#define TCR_EL1_TG1_4K  (_AC(0x2,UL)<<TCR_EL1_TG1_SHIFT)
+#define TCR_EL1_TG1_64K (_AC(0x3,UL)<<TCR_EL1_TG1_SHIFT)
+
+/*
+ * Note that the field TCR_EL1.IPS is not available on AArch32. Also, the field
+ * TCR_EL2.IPS exists only if HCR_EL2.E2H==1.
+ */
+#define TCR_EL1_IPS_SHIFT   (32)
+#define TCR_EL1_IPS_MASK(_AC(0x7,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_32_BIT  (_AC(0x0,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_36_BIT  (_AC(0x1,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_40_BIT  (_AC(0x2,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_42_BIT  (_AC(0x3,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_44_BIT  (_AC(0x4,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_48_BIT  (_AC(0x5,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_52_BIT  (_AC(0x6,ULL)<<TCR_EL1_IPS_SHIFT)
+
+/*
+ * The following values correspond to the bit masks represented by
+ * TCR_EL1_IPS_XX_BIT defines.
+ */
+#define TCR_EL1_IPS_32_BIT_VAL  (32)
+#define TCR_EL1_IPS_36_BIT_VAL  (36)
+#define TCR_EL1_IPS_40_BIT_VAL  (40)
+#define TCR_EL1_IPS

[Xen-devel] [PATCH v9 07/13] arm/mem_access: Introduce GENMASK_ULL bit operation

2017-08-16 Thread Sergej Proskurin
The current implementation of GENMASK is capable of creating bitmasks of
32-bit values on AArch32 and 64-bit values on AArch64. As we need to
create masks for 64-bit values on AArch32 as well, in this commit we
introduce the GENMASK_ULL bit operation. Please note that the
GENMASK_ULL implementation has been lifted from the Linux kernel source
code.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Reviewed-by: Stefano Stabellini <sstabell...@kernel.org>
---
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Cc: George Dunlap <george.dun...@eu.citrix.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Julien Grall <julien.gr...@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Tim Deegan <t...@xen.org>
Cc: Wei Liu <wei.l...@citrix.com>
---
v6: As similar patches have already been submitted and NACKed in the
past, we resubmit this patch with 'THE REST' maintainers in Cc to
discuss whether this patch shall be applied into common or put into
ARM related code.

v7: Change the introduced macro BITS_PER_LONG_LONG to BITS_PER_LLONG.

Define BITS_PER_LLONG also in asm-x86/config.h in order to allow
global usage of the introduced macro GENMASK_ULL.

Remove previously unintended whitespace elimination in the function
get_bitmask_order as it is not the right patch to address cleanup.

v9: Add Reviewed-by Stefano Stabellini.
---
 xen/include/asm-arm/config.h | 2 ++
 xen/include/asm-x86/config.h | 2 ++
 xen/include/xen/bitops.h | 3 +++
 3 files changed, 7 insertions(+)

diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 5b6f3c985d..7da94698e1 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -19,6 +19,8 @@
 #define BITS_PER_LONG (BYTES_PER_LONG << 3)
 #define POINTER_ALIGN BYTES_PER_LONG
 
+#define BITS_PER_LLONG 64
+
 /* xen_ulong_t is always 64 bits */
 #define BITS_PER_XEN_ULONG 64
 
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 25af085af0..0130ac864f 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -15,6 +15,8 @@
 #define BITS_PER_BYTE 8
 #define POINTER_ALIGN BYTES_PER_LONG
 
+#define BITS_PER_LLONG 64
+
 #define BITS_PER_XEN_ULONG BITS_PER_LONG
 
 #define CONFIG_PAGING_ASSISTANCE 1
diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
index bd0883ab22..e2019b02a3 100644
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -10,6 +10,9 @@
 #define GENMASK(h, l) \
 (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
 
+#define GENMASK_ULL(h, l) \
+(((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LLONG - 1 - (h))))
+
 /*
  * ffs: find first bit set. This is defined the same way as
  * the libc and compiler builtin ffs routines, therefore
-- 
2.13.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v9 08/13] arm/guest_access: Move vgic_access_guest_memory to guest_access.h

2017-08-16 Thread Sergej Proskurin
This commit moves the function vgic_access_guest_memory to guestcopy.c
and the header asm/guest_access.h. No functional changes are made.
Please note that the function will be renamed in the following commit.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v6: We added this patch to our patch series.

v7: Add Acked-by Julien Grall.

v9: Include  in  to fix build issues
due to missing type information.
---
 xen/arch/arm/guestcopy.c   | 50 ++
 xen/arch/arm/vgic-v3-its.c |  1 +
 xen/arch/arm/vgic.c| 49 -
 xen/include/asm-arm/guest_access.h |  4 +++
 xen/include/asm-arm/vgic.h |  3 ---
 5 files changed, 55 insertions(+), 52 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 413125f02b..938ffe2668 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -118,6 +118,56 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len
 }
 return 0;
 }
+
+/*
+ * Temporarily map one physical guest page and copy data to or from it.
+ * The data to be copied cannot cross a page boundary.
+ */
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+ uint32_t size, bool is_write)
+{
+struct page_info *page;
+uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
+p2m_type_t p2mt;
+void *p;
+
+/* Do not cross a page boundary. */
+if ( size > (PAGE_SIZE - offset) )
+{
+printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
+   d->domain_id);
+return -EINVAL;
+}
+
+page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
+if ( !page )
+{
+printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
+   d->domain_id);
+return -EINVAL;
+}
+
+if ( !p2m_is_ram(p2mt) )
+{
+put_page(page);
+printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.",
+   d->domain_id);
+return -EINVAL;
+}
+
+p = __map_domain_page(page);
+
+if ( is_write )
+memcpy(p + offset, buf, size);
+else
+memcpy(buf, p + offset, size);
+
+unmap_domain_page(p);
+put_page(page);
+
+return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 9ef792f479..1af6820cab 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1e5107b9f8..7a4e3cdc88 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -638,55 +638,6 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
 }
 
 /*
- * Temporarily map one physical guest page and copy data to or from it.
- * The data to be copied cannot cross a page boundary.
- */
-int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
- uint32_t size, bool is_write)
-{
-struct page_info *page;
-uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
-p2m_type_t p2mt;
-void *p;
-
-/* Do not cross a page boundary. */
-if ( size > (PAGE_SIZE - offset) )
-{
-printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
-   d->domain_id);
-return -EINVAL;
-}
-
-page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
-if ( !page )
-{
-printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
-   d->domain_id);
-return -EINVAL;
-}
-
-if ( !p2m_is_ram(p2mt) )
-{
-put_page(page);
-printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.",
-   d->domain_id);
-return -EINVAL;
-}
-
-p = __map_domain_page(page);
-
-if ( is_write )
-memcpy(p + offset, buf, size);
-else
-memcpy(buf, p + offset, size);
-
-unmap_domain_page(p);
-put_page(page);
-
-return 0;
-}
-
-/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 251e935597..df5737cbe4 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -3,6 +3,7 @@
 
 #include 
 #include 
+#include 
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len);
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
@@ -10,6 +11,9 @@ unsigned long raw_copy_

[Xen-devel] [PATCH v9 03/13] arm/lpae: Introduce lpae_is_page helper

2017-08-16 Thread Sergej Proskurin
This commit introduces a new helper that checks whether the target PTE
holds a page mapping or not. This helper will be used as part of the
following commits.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v6: Change the name of the lpae_page helper to lpae_is_page.

Add Julien Grall's Reviewed-by.
---
 xen/include/asm-arm/lpae.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index efec493313..118ee5ae1a 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -153,6 +153,11 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
 return (level < 3) && lpae_mapping(pte);
 }
 
+static inline bool lpae_is_page(lpae_t pte, unsigned int level)
+{
+return (level == 3) && lpae_valid(pte) && pte.walk.table;
+}
+
 /*
  * AArch64 supports pages with different sizes (4K, 16K, and 64K). To enable
  * page table walks for various configurations, the following helpers enable
-- 
2.13.3




[Xen-devel] [PATCH v9 10/13] arm/mem_access: Add software guest-page-table walk

2017-08-16 Thread Sergej Proskurin
The function p2m_mem_access_check_and_get_page in mem_access.c
translates a gva to an ipa by means of the hardware functionality of the
ARM architecture. This is implemented in the function gva_to_ipa. If
mem_access is active, hardware-based gva to ipa translation might fail,
as gva_to_ipa uses the guest's translation tables, access to which might
be restricted by the active VTTBR. To address this issue, in this commit
we add a software-based guest-page-table walk, which will be used by the
function p2m_mem_access_check_and_get_page to perform the gva to ipa
translation in software in one of the following commits.

Note: The introduced function guest_walk_tables assumes that the domain,
the gva of which is to be translated, is running on the currently active
vCPU. To walk the guest's page tables on a different vCPU, the following
registers would need to be loaded: TCR_EL1, TTBR0_EL1, TTBR1_EL1, and
SCTLR_EL1.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Rename p2m_gva_to_ipa to p2m_walk_gpt and move it to p2m.c.

Move the functionality responsible for walking long-descriptor based
translation tables out of the function p2m_walk_gpt. Also move out
the long-descriptor based translation out of this commit.

Change function parameters in order to return access rights
to a requested gva.

Cosmetic fixes.

v3: Rename the introduced functions to guest_walk_(tables|sd|ld) and
move the implementation to guest_copy.(c|h).

Set permissions in guest_walk_tables also if the MMU is disabled.

Change the function parameter of type "struct p2m_domain *" to
"struct vcpu *" in the function guest_walk_tables.

v4: Change the function parameter of type "struct p2m_domain *" to
"struct vcpu *" in the functions guest_walk_(sd|ld) as well.

v5: Merge two if-statements in guest_walk_tables to ease readability.

Set perms to GV2M_READ as to avoid undefined permissions.

Add Julien Grall's Acked-by.

v6: Adjusted change-log of v5.

Remove Julien Grall's Acked-by as we have changed the initialization
of perms. This needs to be reviewed.

Comment why we initialize perms with GV2M_READ by default. This is
due to the fact that in the current implementation we assume a GVA
to IPA translation with EL1 privileges. Since, valid mappings in the
first stage address translation table are readable by default for
EL1, we initialize perms with GV2M_READ and extend the permissions
according to the particular page table walk.

v7: Add Acked-by Julien Grall.
---
 xen/arch/arm/Makefile|  1 +
 xen/arch/arm/guest_walk.c| 99 
 xen/include/asm-arm/guest_walk.h | 19 
 3 files changed, 119 insertions(+)
 create mode 100644 xen/arch/arm/guest_walk.c
 create mode 100644 xen/include/asm-arm/guest_walk.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 49e1fb2f84..282d2c2949 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -21,6 +21,7 @@ obj-$(CONFIG_HAS_GICV3) += gic-v3.o
 obj-$(CONFIG_HAS_ITS) += gic-v3-its.o
 obj-$(CONFIG_HAS_ITS) += gic-v3-lpi.o
 obj-y += guestcopy.o
+obj-y += guest_walk.o
 obj-y += hvm.o
 obj-y += io.o
 obj-y += irq.o
diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
new file mode 100644
index 00..78badc2949
--- /dev/null
+++ b/xen/arch/arm/guest_walk.c
@@ -0,0 +1,99 @@
+/*
+ * Guest page table walk
+ * Copyright (c) 2017 Sergej Proskurin <prosku...@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include 
+
+/*
+ * The function guest_walk_sd translates a given GVA into an IPA using the
+ * short-descriptor translation table format in software. This function assumes
+ * that the domain is running on the currently active vCPU. To walk the guest's
+ * page table on a different vCPU, the following registers would need to be
+ * loaded: TCR_EL1, TTBR0_EL1, TTBR1_EL1, and SCTLR_EL1.
+ */
+static int guest_walk_sd(const struct vcpu *v,
+ vaddr_t gva, paddr_t *ipa,
+ unsigned int *perms)
+{
+/* Not implemented yet. */
+return -EFAULT;
+}
+
+/*
+ * The function guest_walk_ld 

[Xen-devel] [PATCH v9 06/13] arm/mem_access: Introduce BIT_ULL bit operation

2017-08-16 Thread Sergej Proskurin
We introduce the BIT_ULL macro, which operates on unsigned long long
values, to enable setting bits of 64-bit registers on AArch32. In addition,
this commit adds a define holding the register width of 64 bit
double-word registers. This define simplifies using the associated
constants in the following commits.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: We reused the previous commit with the msg "arm/mem_access: Add
defines holding the width of 32/64bit regs" from v3, as we can reuse
the already existing define BITS_PER_WORD.

v5: Introduce a new macro BIT_ULL instead of changing the type of the
macro BIT.

Remove the define BITS_PER_DOUBLE_WORD.

v6: Add Julien Grall's Reviewed-by.
---
 xen/include/asm-arm/bitops.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/asm-arm/bitops.h b/xen/include/asm-arm/bitops.h
index bda889841b..1cbfb9edb2 100644
--- a/xen/include/asm-arm/bitops.h
+++ b/xen/include/asm-arm/bitops.h
@@ -24,6 +24,7 @@
 #define BIT(nr) (1UL << (nr))
 #define BIT_MASK(nr)    (1UL << ((nr) % BITS_PER_WORD))
 #define BIT_WORD(nr)    ((nr) / BITS_PER_WORD)
+#define BIT_ULL(nr) (1ULL << (nr))
 #define BITS_PER_BYTE   8
 
 #define ADDR (*(volatile int *) addr)
-- 
2.13.3




[Xen-devel] [PATCH v9 00/13] arm/mem_access: Walk guest page tables in SW

2017-08-16 Thread Sergej Proskurin
The function p2m_mem_access_check_and_get_page is called from the
function get_page_from_gva if mem_access is active and the
hardware-aided translation of the given guest virtual address (gva) into
machine address fails. That is, if the stage-2 translation tables
constrain access to the guest's page tables, hardware-assisted
translation will fail. The idea of the function
p2m_mem_access_check_and_get_page is thus to translate the given gva and
check the requested access rights in software. However, as the current
implementation of p2m_mem_access_check_and_get_page makes use of the
hardware-aided gva to ipa translation, the translation might also fail
because of reasons stated above and will become equally relevant for the
altp2m implementation on ARM.  As such, we provide a software guest
translation table walk to address the above mentioned issue.

The current version of the implementation supports translation of both
the short-descriptor as well as the long-descriptor translation table
format on ARMv7 and ARMv8 (AArch32/AArch64).

This revised version incorporates the comments of the previous patch
series, which mainly comprise minor cosmetic fixes. All changes have
been discussed with the associated maintainers and accordingly stated in
the individual patches.

The following patch series can be found on Github[0].

Cheers,
~Sergej

[0] https://github.com/sergej-proskurin/xen (branch arm-gpt-walk-v9)

Sergej Proskurin (13):
  arm/mem_access: Add and cleanup (TCR_|TTBCR_)* defines
  arm/mem_access: Add defines supporting PTs with varying page sizes
  arm/lpae: Introduce lpae_is_page helper
  arm/mem_access: Add short-descriptor pte typedefs and macros
  arm/mem_access: Introduce GV2M_EXEC permission
  arm/mem_access: Introduce BIT_ULL bit operation
  arm/mem_access: Introduce GENMASK_ULL bit operation
  arm/guest_access: Move vgic_access_guest_memory to guest_access.h
  arm/guest_access: Rename vgic_access_guest_memory
  arm/mem_access: Add software guest-page-table walk
  arm/mem_access: Add long-descriptor based gpt
  arm/mem_access: Add short-descriptor based gpt
  arm/mem_access: Walk the guest's pt in software

 xen/arch/arm/Makefile  |   1 +
 xen/arch/arm/guest_walk.c  | 636 +
 xen/arch/arm/guestcopy.c   |  50 +++
 xen/arch/arm/mem_access.c  |  31 +-
 xen/arch/arm/vgic-v3-its.c |  37 +--
 xen/arch/arm/vgic.c|  49 ---
 xen/include/asm-arm/bitops.h   |   1 +
 xen/include/asm-arm/config.h   |   2 +
 xen/include/asm-arm/guest_access.h |   4 +
 xen/include/asm-arm/guest_walk.h   |  19 ++
 xen/include/asm-arm/lpae.h |  66 
 xen/include/asm-arm/p2m.h  |   8 +-
 xen/include/asm-arm/page.h |   1 +
 xen/include/asm-arm/processor.h|  69 +++-
 xen/include/asm-arm/short-desc.h   | 130 
 xen/include/asm-arm/vgic.h |   3 -
 xen/include/asm-x86/config.h   |   2 +
 xen/include/xen/bitops.h   |   3 +
 18 files changed, 1036 insertions(+), 76 deletions(-)
 create mode 100644 xen/arch/arm/guest_walk.c
 create mode 100644 xen/include/asm-arm/guest_walk.h
 create mode 100644 xen/include/asm-arm/short-desc.h

-- 
2.13.3




[Xen-devel] [PATCH v9 05/13] arm/mem_access: Introduce GV2M_EXEC permission

2017-08-16 Thread Sergej Proskurin
We extend the current implementation by an additional permission,
GV2M_EXEC, which will be used to describe execute permissions of PTE's
as part of our guest translation table walk implementation.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/include/asm-arm/page.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index cef2f28914..b8d641bfaf 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -90,6 +90,7 @@
 /* Flags for get_page_from_gva, gvirt_to_maddr etc */
 #define GV2M_READ  (0u<<0)
 #define GV2M_WRITE (1u<<0)
+#define GV2M_EXEC  (1u<<1)
 
 #ifndef __ASSEMBLY__
 
-- 
2.13.3




[Xen-devel] [PATCH v9 12/13] arm/mem_access: Add short-descriptor based gpt

2017-08-16 Thread Sergej Proskurin
This commit adds functionality to walk the guest's page tables using the
short-descriptor translation table format for both ARMv7 and ARMv8. The
implementation is based on ARM DDI 0487B-a J1-6002 and ARM DDI 0406C-b
B3-1506.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Move the implementation to ./xen/arch/arm/guest_copy.c.

Use defines instead of hardcoded values.

Cosmetic fixes & Added more comments.

v4: Adjusted the names of short-descriptor data-types.

Adapt the function to the new parameter of type "struct vcpu *".

Cosmetic fixes.

v5: Make use of the function vgic_access_guest_memory to read page table
entries in guest memory. At the same time, eliminate the offsets
array, as there is no need for an array. Instead, we apply the
associated masks to compute the GVA offsets directly in the code.

Use GENMASK to compute complex masks to ease code readability.

Use the type uint32_t for the TTBR register.

Make use of L2DESC_{SMALL|LARGE}_PAGE_SHIFT instead of
PAGE_SHIFT_{4K|64K} macros.

Remove {L1|L2}DESC_* defines from this commit.

Add comments and cosmetic fixes.

v6: Remove the variable level from the function guest_walk_sd as it is a
left-over from previous commits and is not used anymore.

Remove the falsely added issue that applied the mask to the gva
using the %-operator in the L1DESC_PAGE_TABLE case. Instead, use the
&-operator as it should have been done in the first place.

Make use of renamed function access_guest_memory_by_ipa instead of
vgic_access_guest_memory.

v7: Added Acked-by Julien Grall.

v8: We cast pte.*.base to paddr_t to cope with C type promotion of
types smaller than int. Otherwise pte.*.base would be casted to
int and subsequently sign extended, thus leading to a wrong value.
---
 xen/arch/arm/guest_walk.c | 147 +-
 1 file changed, 145 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index d0d45ad659..c38bedcf65 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -19,6 +19,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * The function guest_walk_sd translates a given GVA into an IPA using the
@@ -31,8 +32,150 @@ static int guest_walk_sd(const struct vcpu *v,
  vaddr_t gva, paddr_t *ipa,
  unsigned int *perms)
 {
-/* Not implemented yet. */
-return -EFAULT;
+int ret;
+bool disabled = true;
+uint32_t ttbr;
+paddr_t mask, paddr;
+short_desc_t pte;
+register_t ttbcr = READ_SYSREG(TCR_EL1);
+unsigned int n = ttbcr & TTBCR_N_MASK;
+struct domain *d = v->domain;
+
+mask = GENMASK_ULL(31, (32 - n));
+
+if ( n == 0 || !(gva & mask) )
+{
+/*
+ * Use TTBR0 for GVA to IPA translation.
+ *
+ * Note that on AArch32, the TTBR0_EL1 register is 32-bit wide.
+ * Nevertheless, we have to use the READ_SYSREG64 macro, as it is
+ * required for reading TTBR0_EL1.
+ */
+ttbr = READ_SYSREG64(TTBR0_EL1);
+
+/* If TTBCR.PD0 is set, translations using TTBR0 are disabled. */
+disabled = ttbcr & TTBCR_PD0;
+}
+else
+{
+/*
+ * Use TTBR1 for GVA to IPA translation.
+ *
+ * Note that on AArch32, the TTBR1_EL1 register is 32-bit wide.
+ * Nevertheless, we have to use the READ_SYSREG64 macro, as it is
+ * required for reading TTBR1_EL1.
+ */
+ttbr = READ_SYSREG64(TTBR1_EL1);
+
+/* If TTBCR.PD1 is set, translations using TTBR1 are disabled. */
+disabled = ttbcr & TTBCR_PD1;
+
+/*
+ * TTBR1 translation always works like n==0 TTBR0 translation (ARM DDI
+ * 0487B.a J1-6003).
+ */
+n = 0;
+}
+
+if ( disabled )
+return -EFAULT;
+
+/*
+ * The address of the L1 descriptor for the initial lookup has the
+ * following format: [ttbr<31:14-n>:gva<31-n:20>:00] (ARM DDI 0487B.a
+ * J1-6003). Note that the following GPA computation already considers that
+ * the first level address translation might comprise up to four
+ * consecutive pages and does not need to be page-aligned if n > 2.
+ */
+mask = GENMASK(31, (14 - n));
+paddr = (ttbr & mask);
+
+mask = GENMASK((31 - n), 20);
+paddr |= (gva & mask) >> 18;
+
+/* Access the guest's memory to read only one PTE. */
+ret = access_guest_memory_by_ipa(d, paddr, &pte, sizeof(short_desc_t), false);
+if ( ret )
+return -EINVAL;
+
+switch ( pte.walk.dt )
+{
+case L1DESC_INVALID:
+return -EFAULT;
+
+case L1DESC_PAG

Re: [Xen-devel] [PATCH v8 08/13] arm/guest_access: Move vgic_access_guest_memory to guest_access.h

2017-08-16 Thread Sergej Proskurin


On 08/16/2017 12:11 PM, Julien Grall wrote:
>
>
> On 16/08/17 10:58, Sergej Proskurin wrote:
>> Hi Julien,
>>
>>
>> On 08/09/2017 10:20 AM, Sergej Proskurin wrote:
>>> This commit moves the function vgic_access_guest_memory to guestcopy.c
>>> and the header asm/guest_access.h. No functional changes are made.
>>> Please note that the function will be renamed in the following commit.
>>>
>>> Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
>>> Acked-by: Julien Grall <julien.gr...@arm.com>
>>> ---
>>> Cc: Stefano Stabellini <sstabell...@kernel.org>
>>> Cc: Julien Grall <julien.gr...@arm.com>
>>> ---
>>> v6: We added this patch to our patch series.
>>>
>>> v7: Add Acked-by Julien Grall.
>>
>> [...]
>>
>>> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
>>> index 251e935597..49716501a4 100644
>>> --- a/xen/include/asm-arm/guest_access.h
>>> +++ b/xen/include/asm-arm/guest_access.h
>>> @@ -10,6 +10,9 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>>>  unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
>>>  unsigned long raw_clear_guest(void *to, unsigned len);
>>>
>>> +int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
>>> + uint32_t size, bool_t is_write);
>>> +
>>>  #define __raw_copy_to_guest raw_copy_to_guest
>>>  #define __raw_copy_from_guest raw_copy_from_guest
>>>  #define __raw_clear_guest raw_clear_guest
>>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>>> index d4ed23df28..e489d0bf21 100644
>>> --- a/xen/include/asm-arm/vgic.h
>>> +++ b/xen/include/asm-arm/vgic.h
>>> @@ -217,9 +217,6 @@ extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
>>>  int vgic_v2_init(struct domain *d, int *mmio_count);
>>>  int vgic_v3_init(struct domain *d, int *mmio_count);
>>>
>>> -int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
>>> - uint32_t size, bool_t is_write);
>>> -
>>>  extern int domain_vgic_register(struct domain *d, int *mmio_count);
>>>  extern int vcpu_vgic_free(struct vcpu *v);
>>>  extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,
>>
>> As Stefano and Andrew mentioned in patch 11/13, due to a recent patch in
>> staging, the above patch fails building due to a missing declaration of
>> struct domain in . This can be easily fixed by adding a
>> forward declaration to struct domain right above
>> vgic_access_guest_memory in  as you will find in the
>> following patch.
>
> Why the forward declaration and not directly including xen/sched.h?

Yeap, that works too :)

Thanks,
~Sergej



Re: [Xen-devel] [PATCH v8 08/13] arm/guest_access: Move vgic_access_guest_memory to guest_access.h

2017-08-16 Thread Sergej Proskurin
Hi Julien,


On 08/09/2017 10:20 AM, Sergej Proskurin wrote:
> This commit moves the function vgic_access_guest_memory to guestcopy.c
> and the header asm/guest_access.h. No functional changes are made.
> Please note that the function will be renamed in the following commit.
>
> Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
> Acked-by: Julien Grall <julien.gr...@arm.com>
> ---
> Cc: Stefano Stabellini <sstabell...@kernel.org>
> Cc: Julien Grall <julien.gr...@arm.com>
> ---
> v6: We added this patch to our patch series.
>
> v7: Add Acked-by Julien Grall.

[...]

> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index 251e935597..49716501a4 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -10,6 +10,9 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
>  unsigned long raw_clear_guest(void *to, unsigned len);
>  
> +int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
> + uint32_t size, bool_t is_write);
> +
>  #define __raw_copy_to_guest raw_copy_to_guest
>  #define __raw_copy_from_guest raw_copy_from_guest
>  #define __raw_clear_guest raw_clear_guest
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index d4ed23df28..e489d0bf21 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -217,9 +217,6 @@ extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
>  int vgic_v2_init(struct domain *d, int *mmio_count);
>  int vgic_v3_init(struct domain *d, int *mmio_count);
>  
> -int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
> - uint32_t size, bool_t is_write);
> -
>  extern int domain_vgic_register(struct domain *d, int *mmio_count);
>  extern int vcpu_vgic_free(struct vcpu *v);
>  extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,

As Stefano and Andrew mentioned in patch 11/13, due to a recent patch in
staging, the above patch fails building due to a missing declaration of
struct domain in . This can be easily fixed by adding a
forward declaration to struct domain right above
vgic_access_guest_memory in  as you will find in the
following patch.

Although this change already fixed the build on my machine, according to
Travis CI one build (XEN_TARGET_ARCH=arm64
CROSS_COMPILE=aarch64-linux-gnu- XEN_CONFIG_EXPERT=y RANDCONFIG=y
debug=n) failed due to missing information of the types paddr_t,
uint32_t, bool_t etc. Which is the reason why I have included
.

The header  already includes . However,
as  gets included from gcov.h before 
it leads to the missing type information issues mentioned above.

Would the following changes be ok with you or shall I remove your
Acked-by in v9?



diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 251e935597..8038e885f4 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -3,6 +3,7 @@

 #include 
 #include 
+#include 

 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len);
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
@@ -10,6 +11,10 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned len);

+struct domain;
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+ uint32_t size, bool_t is_write);
+
 #define __raw_copy_to_guest raw_copy_to_guest
 #define __raw_copy_from_guest raw_copy_from_guest
 #define __raw_clear_guest raw_clear_guest
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index d4ed23df28..e489d0bf21 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -217,9 +217,6 @@ extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
 int vgic_v3_init(struct domain *d, int *mmio_count);

-int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
- uint32_t size, bool_t is_write);
-
 extern int domain_vgic_register(struct domain *d, int *mmio_count);
 extern int vcpu_vgic_free(struct vcpu *v);
 extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,
--
2.13.3



Thanks,
~Sergej




Re: [Xen-devel] [PATCH v8 11/13] arm/mem_access: Add long-descriptor based gpt

2017-08-16 Thread Sergej Proskurin
Hi all,

On 08/16/2017 12:28 AM, Andrew Cooper wrote:
> On 15/08/2017 23:25, Stefano Stabellini wrote:
>> On Tue, 15 Aug 2017, Julien Grall wrote:
>>> On 14/08/17 22:03, Sergej Proskurin wrote:
>>>> Hi Julien,
>>>>
>>>> On 08/14/2017 07:37 PM, Julien Grall wrote:
>>>>> Hi Sergej,
>>>>>
>>>>> On 09/08/17 09:20, Sergej Proskurin wrote:
>>>>>> +/*
>>>>>> + * According to to ARM DDI 0487B.a J1-5927, we return an error if
>>>>>> the found
>>>>> Please drop one of the 'to'. The rest looks good to me.
>>>>>
>>>> Great, thanks. I will remove the second "to" in v9. Would that be an
>>>> Acked-by or shall I tag this patch with a Reviewed-by you?
>>> Acked-by. FIY, you still missing an acked from "The REST" for patch #7, the
>>> rest looks fully acked.
>> I acked patch #7, but patch #8 breaks the build on ARM:
>>
>>
>> In file included from 
>> /local/repos/xen-upstream/xen/include/xen/guest_access.h:10:0,
>>  from device_tree.c:15:
>> /local/repos/xen-upstream/xen/include/asm/guest_access.h:14:32: error: 
>> 'struct domain' declared inside parameter list [-Werror]
>> uint32_t size, bool_t is_write);
>> ^
>> /local/repos/xen-upstream/xen/include/asm/guest_access.h:14:32: error: its 
>> scope is only this definition or declaration, which is probably not what you 
>> want [-Werror]
>> cc1: all warnings being treated as errors
>> make[4]: *** [device_tree.o] Error 1
>>
>>
>> Am I missing anything?
> Possibly a result of Wei's recent patch
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=de62402a9c2e403b049aa238b4fa4e2d618e8870
> which is newer than the posting of this series.
>

Thank you for bringing that up. Since Wei has removed a forward
declaration to struct domain in , my patch series failed to
build right after rebasing to staging. By following Wei's approach,
adding a forward declaration to struct domain in 
fixes the above issue. I will address it separately in patch 08/13.

Thanks,
~Sergej




Re: [Xen-devel] [PATCH v8 07/13] arm/mem_access: Introduce GENMASK_ULL bit operation

2017-08-15 Thread Sergej Proskurin
Hi all,

On 08/09/2017 10:20 AM, Sergej Proskurin wrote:
> The current implementation of GENMASK is capable of creating bitmasks of
> 32-bit values on AArch32 and 64-bit values on AArch64. As we need to
> create masks for 64-bit values on AArch32 as well, in this commit we
> introduce the GENMASK_ULL bit operation. Please note that the
> GENMASK_ULL implementation has been lifted from the linux kernel source
> code.
> 

As all other patches of this patch series have been Acked, I would like
to friendly remind "The REST" maintainers to provide me with your
opinion/review on this particular patch. Thank you very much in advance :)

> Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
> ---
> Cc: Andrew Cooper <andrew.coop...@citrix.com>
> Cc: George Dunlap <george.dun...@eu.citrix.com>
> Cc: Ian Jackson <ian.jack...@eu.citrix.com>
> Cc: Jan Beulich <jbeul...@suse.com>
> Cc: Julien Grall <julien.gr...@arm.com>
> Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
> Cc: Stefano Stabellini <sstabell...@kernel.org>
> Cc: Tim Deegan <t...@xen.org>
> Cc: Wei Liu <wei.l...@citrix.com>
> ---
> v6: As similar patches have already been submitted and NACKed in the
> past, we resubmit this patch with 'THE REST' maintainers in Cc to
> discuss whether this patch shall be applied into common or put into
> ARM related code.
> 
> v7: Change the introduced macro BITS_PER_LONG_LONG to BITS_PER_LLONG.
> 
> Define BITS_PER_LLONG also in asm-x86/config.h in order to allow
> global usage of the introduced macro GENMASK_ULL.
> 
> Remove the previously unintended whitespace elimination in the function
> get_bitmask_order, as this patch is not the right place for such cleanup.
> ---
>  xen/include/asm-arm/config.h | 2 ++
>  xen/include/asm-x86/config.h | 2 ++
>  xen/include/xen/bitops.h | 3 +++
>  3 files changed, 7 insertions(+)
> 
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 5b6f3c985d..7da94698e1 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -19,6 +19,8 @@
>  #define BITS_PER_LONG (BYTES_PER_LONG << 3)
>  #define POINTER_ALIGN BYTES_PER_LONG
>  
> +#define BITS_PER_LLONG 64
> +
>  /* xen_ulong_t is always 64 bits */
>  #define BITS_PER_XEN_ULONG 64
>  
> diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> index bc0730fd9d..8b1de07dbc 100644
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -15,6 +15,8 @@
>  #define BITS_PER_BYTE 8
>  #define POINTER_ALIGN BYTES_PER_LONG
>  
> +#define BITS_PER_LLONG 64
> +
>  #define BITS_PER_XEN_ULONG BITS_PER_LONG
>  
>  #define CONFIG_PAGING_ASSISTANCE 1
> diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
> index bd0883ab22..e2019b02a3 100644
> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -10,6 +10,9 @@
>  #define GENMASK(h, l) \
>  (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
>  
> +#define GENMASK_ULL(h, l) \
> +(((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LLONG - 1 - (h))))
> +
>  /*
>   * ffs: find first bit set. This is defined the same way as
>   * the libc and compiler builtin ffs routines, therefore


Cheers,
~Sergej



Re: [Xen-devel] [PATCH v8 11/13] arm/mem_access: Add long-descriptor based gpt

2017-08-15 Thread Sergej Proskurin
Hi Julien,

On 08/15/2017 12:13 PM, Julien Grall wrote:
> 
> 
> On 14/08/17 22:03, Sergej Proskurin wrote:
>> Hi Julien,
>>
>> On 08/14/2017 07:37 PM, Julien Grall wrote:
>>> Hi Sergej,
>>>
>>> On 09/08/17 09:20, Sergej Proskurin wrote:
>>>> +/*
>>>> + * According to to ARM DDI 0487B.a J1-5927, we return an error if
>>>> the found
>>>
>>> Please drop one of the 'to'. The rest looks good to me.
>>>
>>
>> Great, thanks. I will remove the second "to" in v9. Would that be an
>> Acked-by, or shall I tag this patch with a Reviewed-by from you?
> 
> Acked-by. FYI, you are still missing an ack from "The REST" for patch #7;
> the rest is fully acked.
> 

Yea, I know. I will ping The REST maintainers again.

Thanks,
~Sergej



Re: [Xen-devel] [PATCH v8 11/13] arm/mem_access: Add long-descriptor based gpt

2017-08-14 Thread Sergej Proskurin
Hi Julien,

On 08/14/2017 07:37 PM, Julien Grall wrote:
> Hi Sergej,
> 
> On 09/08/17 09:20, Sergej Proskurin wrote:
>> +/*
>> + * According to to ARM DDI 0487B.a J1-5927, we return an error if
>> the found
> 
> Please drop one of the 'to'. The rest looks good to me.
> 

Great, thanks. I will remove the second "to" in v9. Would that be an
Acked-by, or shall I tag this patch with a Reviewed-by from you?

Thanks,
~Sergej



[Xen-devel] [PATCH v8 11/13] arm/mem_access: Add long-descriptor based gpt

2017-08-09 Thread Sergej Proskurin
This commit adds functionality to walk the guest's page tables using the
long-descriptor translation table format for both ARMv7 and ARMv8.
Similar to the hardware architecture, the implementation supports
different page granularities (4K, 16K, and 64K). The implementation is
based on ARM DDI 0487B.a J1-5922, J1-5999, and ARM DDI 0406C.b B3-1510.

Note that the current implementation lacks support for Large VA/PA on
ARMv8.2 architectures (LVA/LPA, 52-bit virtual and physical address
sizes). The associated location in the code is marked appropriately.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Use TCR_SZ_MASK instead of TTBCR_SZ_MASK for ARM 32-bit guests using
the long-descriptor translation table format.

Cosmetic fixes.

v3: Move the implementation to ./xen/arch/arm/guest_copy.c.

Remove the array strides and declare the array grainsizes as static
const instead of just const to reduce the function stack overhead.

Move parts of the function guest_walk_ld into the static functions
get_ttbr_and_gran_64bit and get_top_bit to reduce complexity.

Use the macro BIT(x) instead of (1UL << x).

Add more comments && Cosmetic fixes.

v4: Move functionality responsible for determining the configured IPA
output-size into a separate function get_ipa_output_size. In this
function, we remove the previously used switch statement, which was
responsible for distinguishing between different IPA output-sizes.
Instead, we retrieve the information from the introduced ipa_sizes
array.

Remove the defines GRANULE_SIZE_INDEX_* and TTBR0_VALID from
guest_walk.h. Instead, introduce the enums granule_size_index
active_ttbr directly inside of guest_walk.c so that the associated
fields don't get exported.

Adapt the function to the new parameter of type "struct vcpu *".

Remove support for 52bit IPA output-sizes entirely from this commit.

Use lpae_* helpers instead of p2m_* helpers.

Cosmetic fixes & Additional comments.

v5: Make use of the function vgic_access_guest_memory to read page table
entries in guest memory.

Invert the indices of the arrays "offsets" and "masks" and simplify
readability by using an appropriate macro for the entries.

Remove remaining CONFIG_ARM_64 #ifdefs.

Remove the use of the macros BITS_PER_WORD and BITS_PER_DOUBLE_WORD.

Use GENMASK_ULL instead of manually creating complex masks to ease
readability.

Also, create a macro CHECK_BASE_SIZE which simply reduces the code
size and simplifies readability.

Make use of the newly introduced lpae_page macro in the if-statement
to test for invalid/reserved mappings in the L3 PTE.

Cosmetic fixes and additional comments.

v6: Convert the macro CHECK_BASE_SIZE into a helper function
check_base_size. The use of the old CHECK_BASE_SIZE was confusing as
it affected the control-flow through a return as part of the macro.

Return the value -EFAULT instead of -EINVAL if access to the guest's
memory fails.

Simplify the check in the end of the table walk that ensures that
the found PTE is a page or a superpage. The new implementation
checks if the pte maps a valid page or a superpage and returns an
-EFAULT only if both conditions are not true.

Adjust the type of the array offsets to paddr_t instead of vaddr_t
to allow working with the changed *_table_offset_* helpers, which
return offsets of type paddr_t.

Make use of renamed function access_guest_memory_by_ipa instead of
vgic_access_guest_memory.

v7: Change the return type of check_base_size to bool as it returns only
two possible values and the caller is interested only whether the call
has succeeded or not.

Use a mask for the computation of the IPA, as the lower values of
the PTE's base address do not need to be zeroed out.

Cosmetic fixes in comments.

v8: By calling access_guest_memory_by_ipa in guest_walk_(ld|sd), we rely
on the p2m->lock (rw_lock) to be recursive. To avoid bugs in the
future implementation, we add a comment in struct p2m_domain to
address this case. Thus, we make the future implementation aware of
the nested use of the lock.
---
 xen/arch/arm/guest_walk.c | 398 +-
 xen/include/asm-arm/p2m.h |   8 +-
 2 files changed, 403 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index 78badc2949..c6441ab2f8 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -15,7 +15,10 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include 
 #include 
+#include 
+#include 
 
 /*
  * The function guest_walk_sd translates a given GVA into an IPA using the
@@ -33,6 +36,174

[Xen-devel] [PATCH v8 12/13] arm/mem_access: Add short-descriptor based gpt

2017-08-09 Thread Sergej Proskurin
This commit adds functionality to walk the guest's page tables using the
short-descriptor translation table format for both ARMv7 and ARMv8. The
implementation is based on ARM DDI 0487B-a J1-6002 and ARM DDI 0406C-b
B3-1506.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Move the implementation to ./xen/arch/arm/guest_copy.c.

Use defines instead of hardcoded values.

Cosmetic fixes & Added more comments.

v4: Adjusted the names of short-descriptor data-types.

Adapt the function to the new parameter of type "struct vcpu *".

Cosmetic fixes.

v5: Make use of the function vgic_access_guest_memory to read page table
entries in guest memory. At the same time, eliminate the offsets
array, as there is no need for an array. Instead, we apply the
associated masks to compute the GVA offsets directly in the code.

Use GENMASK to compute complex masks to ease code readability.

Use the type uint32_t for the TTBR register.

Make use of L2DESC_{SMALL|LARGE}_PAGE_SHIFT instead of
PAGE_SHIFT_{4K|64K} macros.

Remove {L1|L2}DESC_* defines from this commit.

Add comments and cosmetic fixes.

v6: Remove the variable level from the function guest_walk_sd as it is a
left-over from previous commits and is not used anymore.

Remove the falsely added issue that applied the mask to the gva
using the %-operator in the L1DESC_PAGE_TABLE case. Instead, use the
&-operator as it should have been done in the first place.

Make use of renamed function access_guest_memory_by_ipa instead of
vgic_access_guest_memory.

v7: Added Acked-by Julien Grall.

v8: We cast pte.*.base to paddr_t to cope with C type promotion of
types smaller than int. Otherwise pte.*.base would be casted to
int and subsequently sign extended, thus leading to a wrong value.
---
 xen/arch/arm/guest_walk.c | 147 +-
 1 file changed, 145 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index c6441ab2f8..7f34a2b1d3 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -19,6 +19,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * The function guest_walk_sd translates a given GVA into an IPA using the
@@ -31,8 +32,150 @@ static int guest_walk_sd(const struct vcpu *v,
  vaddr_t gva, paddr_t *ipa,
  unsigned int *perms)
 {
-/* Not implemented yet. */
-return -EFAULT;
+int ret;
+bool disabled = true;
+uint32_t ttbr;
+paddr_t mask, paddr;
+short_desc_t pte;
+register_t ttbcr = READ_SYSREG(TCR_EL1);
+unsigned int n = ttbcr & TTBCR_N_MASK;
+struct domain *d = v->domain;
+
+mask = GENMASK_ULL(31, (32 - n));
+
+if ( n == 0 || !(gva & mask) )
+{
+/*
+ * Use TTBR0 for GVA to IPA translation.
+ *
+ * Note that on AArch32, the TTBR0_EL1 register is 32-bit wide.
+ * Nevertheless, we have to use the READ_SYSREG64 macro, as it is
+ * required for reading TTBR0_EL1.
+ */
+ttbr = READ_SYSREG64(TTBR0_EL1);
+
+/* If TTBCR.PD0 is set, translations using TTBR0 are disabled. */
+disabled = ttbcr & TTBCR_PD0;
+}
+else
+{
+/*
+ * Use TTBR1 for GVA to IPA translation.
+ *
+ * Note that on AArch32, the TTBR1_EL1 register is 32-bit wide.
+ * Nevertheless, we have to use the READ_SYSREG64 macro, as it is
+ * required for reading TTBR1_EL1.
+ */
+ttbr = READ_SYSREG64(TTBR1_EL1);
+
+/* If TTBCR.PD1 is set, translations using TTBR1 are disabled. */
+disabled = ttbcr & TTBCR_PD1;
+
+/*
+ * TTBR1 translation always works like n==0 TTBR0 translation (ARM DDI
+ * 0487B.a J1-6003).
+ */
+n = 0;
+}
+
+if ( disabled )
+return -EFAULT;
+
+/*
+ * The address of the L1 descriptor for the initial lookup has the
+ * following format: [ttbr<31:14-n>:gva<31-n:20>:00] (ARM DDI 0487B.a
+ * J1-6003). Note that the following GPA computation already considers that
+ * the first level address translation might comprise up to four
+ * consecutive pages and does not need to be page-aligned if n > 2.
+ */
+mask = GENMASK(31, (14 - n));
+paddr = (ttbr & mask);
+
+mask = GENMASK((31 - n), 20);
+paddr |= (gva & mask) >> 18;
+
+/* Access the guest's memory to read only one PTE. */
+ret = access_guest_memory_by_ipa(d, paddr, &pte, sizeof(short_desc_t), false);
+if ( ret )
+return -EINVAL;
+
+switch ( pte.walk.dt )
+{
+case L1DESC_INVALID:
+return -EFAULT;
+
+case L1DESC_PAG

[Xen-devel] [PATCH v8 05/13] arm/mem_access: Introduce GV2M_EXEC permission

2017-08-09 Thread Sergej Proskurin
We extend the current implementation by an additional permission,
GV2M_EXEC, which will be used to describe execute permissions of PTE's
as part of our guest translation table walk implementation.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
 xen/include/asm-arm/page.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index cef2f28914..b8d641bfaf 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -90,6 +90,7 @@
 /* Flags for get_page_from_gva, gvirt_to_maddr etc */
 #define GV2M_READ  (0u<<0)
 #define GV2M_WRITE (1u<<0)
+#define GV2M_EXEC  (1u<<1)
 
 #ifndef __ASSEMBLY__
 
-- 
2.13.3




[Xen-devel] [PATCH v8 06/13] arm/mem_access: Introduce BIT_ULL bit operation

2017-08-09 Thread Sergej Proskurin
We introduce the BIT_ULL macro, which operates on unsigned long long
values, to enable setting bits of 64-bit registers on AArch32. In
addition, this commit adds a define holding the register width of
64-bit double-word registers. This define simplifies using the
associated constants in the following commits.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v4: We reused the previous commit with the msg "arm/mem_access: Add
defines holding the width of 32/64bit regs" from v3, as we can reuse
the already existing define BITS_PER_WORD.

v5: Introduce a new macro BIT_ULL instead of changing the type of the
macro BIT.

Remove the define BITS_PER_DOUBLE_WORD.

v6: Add Julien Grall's Reviewed-by.
---
 xen/include/asm-arm/bitops.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/include/asm-arm/bitops.h b/xen/include/asm-arm/bitops.h
index bda889841b..1cbfb9edb2 100644
--- a/xen/include/asm-arm/bitops.h
+++ b/xen/include/asm-arm/bitops.h
@@ -24,6 +24,7 @@
 #define BIT(nr) (1UL << (nr))
 #define BIT_MASK(nr)(1UL << ((nr) % BITS_PER_WORD))
 #define BIT_WORD(nr)((nr) / BITS_PER_WORD)
+#define BIT_ULL(nr) (1ULL << (nr))
 #define BITS_PER_BYTE   8
 
 #define ADDR (*(volatile int *) addr)
-- 
2.13.3




[Xen-devel] [PATCH v8 13/13] arm/mem_access: Walk the guest's pt in software

2017-08-09 Thread Sergej Proskurin
In this commit, we make use of the gpt walk functionality introduced in
the previous commits. If mem_access is active, hardware-based gva to ipa
translation might fail, as gva_to_ipa uses the guest's translation
tables, access to which might be restricted by the active VTTBR. To
side-step potential translation errors in the function
p2m_mem_access_check_and_get_page due to restricted memory (e.g. to the
guest's page tables themselves), we walk the guest's page tables in
software.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Tamas K Lengyel <ta...@tklengyel.com>
---
Cc: Razvan Cojocaru <rcojoc...@bitdefender.com>
Cc: Tamas K Lengyel <ta...@tklengyel.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Check the returned access rights after walking the guest's page tables in
the function p2m_mem_access_check_and_get_page.

v3: Adapt Function names and parameter.

v4: Comment why we need to fail if the permission flags that are
requested by the caller do not satisfy the mapped page.

Cosmetic fix that simplifies the if-statement checking for the
GV2M_WRITE permission.

v5: Move comment to ease code readability.
---
 xen/arch/arm/mem_access.c | 31 ++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index e0888bbad2..3e2bb4088a 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
 xenmem_access_t *access)
@@ -101,6 +102,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
   const struct vcpu *v)
 {
 long rc;
+unsigned int perms;
 paddr_t ipa;
 gfn_t gfn;
 mfn_t mfn;
@@ -110,8 +112,35 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
 struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
 
 rc = gva_to_ipa(gva, &ipa, flag);
+
+/*
+ * In case mem_access is active, hardware-based gva_to_ipa translation
+ * might fail. Since gva_to_ipa uses the guest's translation tables, access
+ * to which might be restricted by the active VTTBR, we perform a gva to
+ * ipa translation in software.
+ */
 if ( rc < 0 )
-goto err;
+{
+/*
+ * The software gva to ipa translation can still fail, e.g., if the gva
+ * is not mapped.
+ */
+if ( guest_walk_tables(v, gva, &ipa, &perms) < 0 )
+goto err;
+
+/*
+ * Check permissions that are assumed by the caller. For instance in
+ * case of guestcopy, the caller assumes that the translated page can
+ * be accessed with requested permissions. If this is not the case, we
+ * should fail.
+ *
+ * Please note that we do not check for the GV2M_EXEC permission. Yet,
+ * since the hardware-based translation through gva_to_ipa does not
+ * test for execute permissions this check can be left out.
+ */
+if ( (flag & GV2M_WRITE) && !(perms & GV2M_WRITE) )
+goto err;
+}
 
 gfn = gaddr_to_gfn(ipa);
 
-- 
2.13.3




[Xen-devel] [PATCH v8 10/13] arm/mem_access: Add software guest-page-table walk

2017-08-09 Thread Sergej Proskurin
The function p2m_mem_access_check_and_get_page in mem_access.c
translates a gva to an ipa by means of the hardware functionality of the
ARM architecture. This is implemented in the function gva_to_ipa. If
mem_access is active, hardware-based gva to ipa translation might fail,
as gva_to_ipa uses the guest's translation tables, access to which might
be restricted by the active VTTBR. To address this issue, in this commit
we add a software-based guest-page-table walk, which will be used by the
function p2m_mem_access_check_and_get_page to perform the gva to ipa
translation in software in one of the following commits.

Note: The introduced function guest_walk_tables assumes that the domain,
the gva of which is to be translated, is running on the currently active
vCPU. To walk the guest's page tables on a different vCPU, the following
registers would need to be loaded: TCR_EL1, TTBR0_EL1, TTBR1_EL1, and
SCTLR_EL1.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Rename p2m_gva_to_ipa to p2m_walk_gpt and move it to p2m.c.

Move the functionality responsible for walking long-descriptor based
translation tables out of the function p2m_walk_gpt. Also move out
the long-descriptor based translation out of this commit.

Change function parameters in order to return the access rights
to a requested gva.

Cosmetic fixes.

v3: Rename the introduced functions to guest_walk_(tables|sd|ld) and
move the implementation to guest_copy.(c|h).

Set permissions in guest_walk_tables also if the MMU is disabled.

Change the function parameter of type "struct p2m_domain *" to
"struct vcpu *" in the function guest_walk_tables.

v4: Change the function parameter of type "struct p2m_domain *" to
"struct vcpu *" in the functions guest_walk_(sd|ld) as well.

v5: Merge two if-statements in guest_walk_tables to ease readability.

Set perms to GV2M_READ as to avoid undefined permissions.

Add Julien Grall's Acked-by.

v6: Adjusted change-log of v5.

Remove Julien Grall's Acked-by as we have changed the initialization
of perms. This needs to be reviewed.

Comment why we initialize perms with GV2M_READ by default. This is
due to the fact that in the current implementation we assume a GVA
to IPA translation with EL1 privileges. Since, valid mappings in the
first stage address translation table are readable by default for
EL1, we initialize perms with GV2M_READ and extend the permissions
according to the particular page table walk.

v7: Add Acked-by Julien Grall.
---
 xen/arch/arm/Makefile|  1 +
 xen/arch/arm/guest_walk.c| 99 
 xen/include/asm-arm/guest_walk.h | 19 
 3 files changed, 119 insertions(+)
 create mode 100644 xen/arch/arm/guest_walk.c
 create mode 100644 xen/include/asm-arm/guest_walk.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 49e1fb2f84..282d2c2949 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -21,6 +21,7 @@ obj-$(CONFIG_HAS_GICV3) += gic-v3.o
 obj-$(CONFIG_HAS_ITS) += gic-v3-its.o
 obj-$(CONFIG_HAS_ITS) += gic-v3-lpi.o
 obj-y += guestcopy.o
+obj-y += guest_walk.o
 obj-y += hvm.o
 obj-y += io.o
 obj-y += irq.o
diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
new file mode 100644
index 00..78badc2949
--- /dev/null
+++ b/xen/arch/arm/guest_walk.c
@@ -0,0 +1,99 @@
+/*
+ * Guest page table walk
+ * Copyright (c) 2017 Sergej Proskurin <prosku...@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include 
+
+/*
+ * The function guest_walk_sd translates a given GVA into an IPA using the
+ * short-descriptor translation table format in software. This function assumes
+ * that the domain is running on the currently active vCPU. To walk the guest's
+ * page table on a different vCPU, the following registers would need to be
+ * loaded: TCR_EL1, TTBR0_EL1, TTBR1_EL1, and SCTLR_EL1.
+ */
+static int guest_walk_sd(const struct vcpu *v,
+ vaddr_t gva, paddr_t *ipa,
+ unsigned int *perms)
+{
+/* Not implemented yet. */
+return -EFAULT;
+}
+
+/*
+ * The function guest_walk_ld 

[Xen-devel] [PATCH v8 07/13] arm/mem_access: Introduce GENMASK_ULL bit operation

2017-08-09 Thread Sergej Proskurin
The current implementation of GENMASK is capable of creating bitmasks of
32-bit values on AArch32 and 64-bit values on AArch64. As we need to
create masks for 64-bit values on AArch32 as well, in this commit we
introduce the GENMASK_ULL bit operation. Please note that the
GENMASK_ULL implementation has been lifted from the linux kernel source
code.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Andrew Cooper <andrew.coop...@citrix.com>
Cc: George Dunlap <george.dun...@eu.citrix.com>
Cc: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Julien Grall <julien.gr...@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Tim Deegan <t...@xen.org>
Cc: Wei Liu <wei.l...@citrix.com>
---
v6: As similar patches have already been submitted and NACKed in the
past, we resubmit this patch with 'THE REST' maintainers in Cc to
discuss whether this patch shall be applied into common or put into
ARM related code.

v7: Change the introduced macro BITS_PER_LONG_LONG to BITS_PER_LLONG.

Define BITS_PER_LLONG also in asm-x86/config.h in order to allow
global usage of the introduced macro GENMASK_ULL.

Remove the previously unintended whitespace elimination in the function
get_bitmask_order, as this patch is not the right place for such cleanup.
---
 xen/include/asm-arm/config.h | 2 ++
 xen/include/asm-x86/config.h | 2 ++
 xen/include/xen/bitops.h | 3 +++
 3 files changed, 7 insertions(+)

diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 5b6f3c985d..7da94698e1 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -19,6 +19,8 @@
 #define BITS_PER_LONG (BYTES_PER_LONG << 3)
 #define POINTER_ALIGN BYTES_PER_LONG
 
+#define BITS_PER_LLONG 64
+
 /* xen_ulong_t is always 64 bits */
 #define BITS_PER_XEN_ULONG 64
 
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index bc0730fd9d..8b1de07dbc 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -15,6 +15,8 @@
 #define BITS_PER_BYTE 8
 #define POINTER_ALIGN BYTES_PER_LONG
 
+#define BITS_PER_LLONG 64
+
 #define BITS_PER_XEN_ULONG BITS_PER_LONG
 
 #define CONFIG_PAGING_ASSISTANCE 1
diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
index bd0883ab22..e2019b02a3 100644
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -10,6 +10,9 @@
 #define GENMASK(h, l) \
 (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
 
+#define GENMASK_ULL(h, l) \
+(((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LLONG - 1 - (h))))
+
 /*
  * ffs: find first bit set. This is defined the same way as
  * the libc and compiler builtin ffs routines, therefore
-- 
2.13.3




[Xen-devel] [PATCH v8 04/13] arm/mem_access: Add short-descriptor pte typedefs and macros

2017-08-09 Thread Sergej Proskurin
The current implementation does not provide appropriate types for
short-descriptor translation table entries. As such, this commit adds new
types, which simplify managing the respective translation table entries.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Add more short-descriptor related pte typedefs that will be used by
the following commits.

v4: Move short-descriptor pte typedefs out of page.h into short-desc.h.

Change the type unsigned int to bool of every bitfield in
short-descriptor related data-structures that holds only one bit.

Change the typedef names from pte_sd_* to short_desc_*.

v5: Add {L1|L2}DESC_* defines to this commit.

v6: Add Julien Grall's Acked-by.
---
 xen/include/asm-arm/short-desc.h | 130 +++
 1 file changed, 130 insertions(+)
 create mode 100644 xen/include/asm-arm/short-desc.h

diff --git a/xen/include/asm-arm/short-desc.h b/xen/include/asm-arm/short-desc.h
new file mode 100644
index 00..9652a103c4
--- /dev/null
+++ b/xen/include/asm-arm/short-desc.h
@@ -0,0 +1,130 @@
+#ifndef __ARM_SHORT_DESC_H__
+#define __ARM_SHORT_DESC_H__
+
+/*
+ * First level translation table descriptor types used by the AArch32
+ * short-descriptor translation table format.
+ */
+#define L1DESC_INVALID  (0)
+#define L1DESC_PAGE_TABLE   (1)
+#define L1DESC_SECTION  (2)
+#define L1DESC_SECTION_PXN  (3)
+
+/* Defines for section and supersection shifts. */
+#define L1DESC_SECTION_SHIFT(20)
+#define L1DESC_SUPERSECTION_SHIFT   (24)
+#define L1DESC_SUPERSECTION_EXT_BASE1_SHIFT (32)
+#define L1DESC_SUPERSECTION_EXT_BASE2_SHIFT (36)
+
+/* Second level translation table descriptor types. */
+#define L2DESC_INVALID  (0)
+
+/* Defines for small (4K) and large page (64K) shifts. */
+#define L2DESC_SMALL_PAGE_SHIFT (12)
+#define L2DESC_LARGE_PAGE_SHIFT (16)
+
+/*
+ * Comprises bits of the level 1 short-descriptor format representing
+ * a section.
+ */
+typedef struct __packed {
+bool pxn:1; /* Privileged Execute Never */
+bool sec:1; /* == 1 if section or supersection */
+bool b:1;   /* Bufferable */
+bool c:1;   /* Cacheable */
+bool xn:1;  /* Execute Never */
+unsigned int dom:4; /* Domain field */
+bool impl:1;/* Implementation defined */
+unsigned int ap:2;  /* AP[1:0] */
+unsigned int tex:3; /* TEX[2:0] */
+bool ro:1;  /* AP[2] */
+bool s:1;   /* Shareable */
+bool ng:1;  /* Non-global */
+bool supersec:1;/* Must be 0 for sections */
+bool ns:1;  /* Non-secure */
+unsigned int base:12;   /* Section base address */
+} short_desc_l1_sec_t;
+
+/*
+ * Comprises bits of the level 1 short-descriptor format representing
+ * a supersection.
+ */
+typedef struct __packed {
+bool pxn:1; /* Privileged Execute Never */
+bool sec:1; /* == 1 if section or supersection */
+bool b:1;   /* Bufferable */
+bool c:1;   /* Cacheable */
+bool xn:1;  /* Execute Never */
+unsigned int extbase2:4;/* Extended base address, PA[39:36] */
+bool impl:1;/* Implementation defined */
+unsigned int ap:2;  /* AP[1:0] */
+unsigned int tex:3; /* TEX[2:0] */
+bool ro:1;  /* AP[2] */
+bool s:1;   /* Shareable */
+bool ng:1;  /* Non-global */
+bool supersec:1;/* Must be 0 for sections */
+bool ns:1;  /* Non-secure */
+unsigned int extbase1:4;/* Extended base address, PA[35:32] */
+unsigned int base:8;/* Supersection base address */
+} short_desc_l1_supersec_t;
+
+/*
+ * Comprises bits of the level 2 short-descriptor format representing
+ * a small page.
+ */
+typedef struct __packed {
+bool xn:1;  /* Execute Never */
+bool page:1;/* ==1 if small page */
+bool b:1;   /* Bufferable */
+bool c:1;   /* Cacheable */
+unsigned int ap:2;  /* AP[1:0] */
+unsigned int tex:3; /* TEX[2:0] */
+bool ro:1;  /* AP[2] */
+bool s:1;   /* Shareable */
+bool ng:1;  /* Non-global */
+unsigned int base:20;   /* Small page base address */
+} short_desc_l2_page_t;
+
+/*
+ * Comprises bits of the level 2 short-descriptor format representing
+ * a large page.
+ */
+typedef struct __pa

[Xen-devel] [PATCH v8 03/13] arm/lpae: Introduce lpae_is_page helper

2017-08-09 Thread Sergej Proskurin
This commit introduces a new helper that checks whether the target PTE
holds a page mapping or not. This helper will be used as part of the
following commits.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v6: Change the name of the lpae_page helper to lpae_is_page.

Add Julien Grall's Reviewed-by.
---
 xen/include/asm-arm/lpae.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index efec493313..118ee5ae1a 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -153,6 +153,11 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned 
int level)
 return (level < 3) && lpae_mapping(pte);
 }
 
+static inline bool lpae_is_page(lpae_t pte, unsigned int level)
+{
+return (level == 3) && lpae_valid(pte) && pte.walk.table;
+}
+
 /*
  * AArch64 supports pages with different sizes (4K, 16K, and 64K). To enable
  * page table walks for various configurations, the following helpers enable
-- 
2.13.3


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v8 08/13] arm/guest_access: Move vgic_access_guest_memory to guest_access.h

2017-08-09 Thread Sergej Proskurin
This commit moves the function vgic_access_guest_memory to guestcopy.c
and the header asm/guest_access.h. No functional changes are made.
Please note that the function will be renamed in the following commit.
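
The moved helper rejects any access that would span a page boundary. That restriction can be exercised in isolation with a small self-contained sketch; the PAGE_* constants below are re-declared for illustration only and assume 4K pages:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustration-only re-declarations of Xen's PAGE_* constants (4K pages). */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Mirrors the boundary check at the top of vgic_access_guest_memory(). */
static bool crosses_page_boundary(uint64_t gpa, uint32_t size)
{
    uint64_t offset = gpa & ~PAGE_MASK;   /* offset within the page */

    return size > PAGE_SIZE - offset;
}
```

An access of exactly one page starting at a page boundary is allowed; the same size starting one byte later is not.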

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v6: We added this patch to our patch series.

v7: Add Acked-by Julien Grall.
---
 xen/arch/arm/guestcopy.c   | 50 ++
 xen/arch/arm/vgic-v3-its.c |  1 +
 xen/arch/arm/vgic.c| 49 -
 xen/include/asm-arm/guest_access.h |  3 +++
 xen/include/asm-arm/vgic.h |  3 ---
 5 files changed, 54 insertions(+), 52 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 413125f02b..938ffe2668 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -118,6 +118,56 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 }
 return 0;
 }
+
+/*
+ * Temporarily map one physical guest page and copy data to or from it.
+ * The data to be copied cannot cross a page boundary.
+ */
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+ uint32_t size, bool is_write)
+{
+struct page_info *page;
+uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
+p2m_type_t p2mt;
+void *p;
+
+/* Do not cross a page boundary. */
+if ( size > (PAGE_SIZE - offset) )
+{
+printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
+   d->domain_id);
+return -EINVAL;
+}
+
+page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
+if ( !page )
+{
+printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
+   d->domain_id);
+return -EINVAL;
+}
+
+if ( !p2m_is_ram(p2mt) )
+{
+put_page(page);
+printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.",
+   d->domain_id);
+return -EINVAL;
+}
+
+p = __map_domain_page(page);
+
+if ( is_write )
+memcpy(p + offset, buf, size);
+else
+memcpy(buf, p + offset, size);
+
+unmap_domain_page(p);
+put_page(page);
+
+return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 9ef792f479..1af6820cab 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include <asm/guest_access.h>
 #include 
 #include 
 #include 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1e5107b9f8..7a4e3cdc88 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -638,55 +638,6 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
 }
 
 /*
- * Temporarily map one physical guest page and copy data to or from it.
- * The data to be copied cannot cross a page boundary.
- */
-int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
- uint32_t size, bool is_write)
-{
-struct page_info *page;
-uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
-p2m_type_t p2mt;
-void *p;
-
-/* Do not cross a page boundary. */
-if ( size > (PAGE_SIZE - offset) )
-{
-printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
-   d->domain_id);
-return -EINVAL;
-}
-
-page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
-if ( !page )
-{
-printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
-   d->domain_id);
-return -EINVAL;
-}
-
-if ( !p2m_is_ram(p2mt) )
-{
-put_page(page);
-printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.",
-   d->domain_id);
-return -EINVAL;
-}
-
-p = __map_domain_page(page);
-
-if ( is_write )
-memcpy(p + offset, buf, size);
-else
-memcpy(buf, p + offset, size);
-
-unmap_domain_page(p);
-put_page(page);
-
-return 0;
-}
-
-/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 251e935597..49716501a4 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -10,6 +10,9 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned len);
 
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+   

[Xen-devel] [PATCH v8 01/13] arm/mem_access: Add and cleanup (TCR_|TTBCR_)* defines

2017-08-09 Thread Sergej Proskurin
This commit adds (TCR_|TTBCR_)* defines to simplify access to the
respective register contents. At the same time, we adjust the macros
TCR_T0SZ and TCR_TG0_* by using the newly introduced TCR_T0SZ_SHIFT and
TCR_TG0_SHIFT instead of the hardcoded values.
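
Because these constants include their shifts, a consumer must not mix shifted and non-shifted forms. The sketch below (hypothetical helpers, not Xen's actual get_ipa_output_size) contrasts the broken comparison discussed at the top of this thread with the correct one:

```c
#include <stdbool.h>
#include <stdint.h>

/* Shifted IPS constants, as defined in this patch. */
#define TCR_EL1_IPS_SHIFT   32
#define TCR_EL1_IPS_MASK    (UINT64_C(0x7) << TCR_EL1_IPS_SHIFT)
#define TCR_EL1_IPS_48_BIT  (UINT64_C(0x5) << TCR_EL1_IPS_SHIFT)

/* Buggy pattern: the field is shifted down, then compared against the
 * shifted constant -- the comparison can never be true. */
static bool ips_is_48bit_buggy(uint64_t tcr)
{
    uint64_t ips = (tcr & TCR_EL1_IPS_MASK) >> TCR_EL1_IPS_SHIFT;

    return ips == TCR_EL1_IPS_48_BIT;
}

/* Correct pattern: compare the masked (still shifted) value directly. */
static bool ips_is_48bit(uint64_t tcr)
{
    return (tcr & TCR_EL1_IPS_MASK) == TCR_EL1_IPS_48_BIT;
}
```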

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Acked-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v2: Define TCR_SZ_MASK in a way so that it can be also applied to 32-bit guests
using the long-descriptor translation table format.

Extend the previous commit by further defines allowing a simplified access
to the registers TCR_EL1 and TTBCR.

v3: Replace the hardcoded value 0 in the TCR_T0SZ macro with the newly
introduced TCR_T0SZ_SHIFT. Also, replace the hardcoded value 14 in
the TCR_TG0_* macros with the introduced TCR_TG0_SHIFT.

Comment when to apply the defines TTBCR_PD(0|1), according to ARM
DDI 0487B.a and ARM DDI 0406C.b.

Remove TCR_TB_* defines.

Comment when certain TCR_EL2 register fields can be applied.

v4: Cosmetic changes.

v5: Remove the shift by 0 of the TCR_SZ_MASK as it can be applied to
both TCR_T0SZ and TCR_T1SZ (which reside at different offsets).

Adjust commit message to make clear that we do not only add but also
cleanup some TCR_* defines.

v6: Changed the comment of TCR_SZ_MASK as we falsely referenced a
section instead of a page.

Add Julien Grall's Acked-by.
---
 xen/include/asm-arm/processor.h | 69 ++---
 1 file changed, 65 insertions(+), 4 deletions(-)

diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 855ded1b07..898160ce00 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -94,6 +94,13 @@
 #define TTBCR_N_2KB  _AC(0x03,U)
 #define TTBCR_N_1KB  _AC(0x04,U)
 
+/*
+ * TTBCR_PD(0|1) can be applied only if LPAE is disabled, i.e., TTBCR.EAE==0
+ * (ARM DDI 0487B.a G6-5203 and ARM DDI 0406C.b B4-1722).
+ */
+#define TTBCR_PD0   (_AC(1,U)<<4)
+#define TTBCR_PD1   (_AC(1,U)<<5)
+
 /* SCTLR System Control Register. */
 /* HSCTLR is a subset of this. */
 #define SCTLR_TE(_AC(1,U)<<30)
@@ -154,7 +161,20 @@
 
 /* TCR: Stage 1 Translation Control */
 
-#define TCR_T0SZ(x) ((x)<<0)
+#define TCR_T0SZ_SHIFT  (0)
+#define TCR_T1SZ_SHIFT  (16)
+#define TCR_T0SZ(x) ((x)<<TCR_T0SZ_SHIFT)
+
+/*
+ * According to ARM DDI 0487B.a, TCR_EL1.{T0SZ,T1SZ} (AArch64, page D7-2480)
+ * comprises 6 bits and TTBCR.{T0SZ,T1SZ} (AArch32, page G6-5204) comprises 3
+ * bits following another 3 bits for RES0. Thus, the mask for both registers
+ * should be 0x3f.
+ */
+#define TCR_SZ_MASK (_AC(0x3f,UL))
+
+#define TCR_EPD0(_AC(0x1,UL)<<7)
+#define TCR_EPD1(_AC(0x1,UL)<<23)
 
 #define TCR_IRGN0_NC(_AC(0x0,UL)<<8)
 #define TCR_IRGN0_WBWA  (_AC(0x1,UL)<<8)
@@ -170,9 +190,50 @@
 #define TCR_SH0_OS  (_AC(0x2,UL)<<12)
 #define TCR_SH0_IS  (_AC(0x3,UL)<<12)
 
-#define TCR_TG0_4K  (_AC(0x0,UL)<<14)
-#define TCR_TG0_64K (_AC(0x1,UL)<<14)
-#define TCR_TG0_16K (_AC(0x2,UL)<<14)
+/* Note that the fields TCR_EL1.{TG0,TG1} are not available on AArch32. */
+#define TCR_TG0_SHIFT   (14)
+#define TCR_TG0_MASK(_AC(0x3,UL)<<TCR_TG0_SHIFT)
+#define TCR_TG0_4K  (_AC(0x0,UL)<<TCR_TG0_SHIFT)
+#define TCR_TG0_64K (_AC(0x1,UL)<<TCR_TG0_SHIFT)
+#define TCR_TG0_16K (_AC(0x2,UL)<<TCR_TG0_SHIFT)
+
+/* Note that the field TCR_EL2.TG1 exists only if HCR_EL2.E2H==1. */
+#define TCR_EL1_TG1_SHIFT   (30)
+#define TCR_EL1_TG1_MASK(_AC(0x3,UL)<<TCR_EL1_TG1_SHIFT)
+#define TCR_EL1_TG1_16K (_AC(0x1,UL)<<TCR_EL1_TG1_SHIFT)
+#define TCR_EL1_TG1_4K  (_AC(0x2,UL)<<TCR_EL1_TG1_SHIFT)
+#define TCR_EL1_TG1_64K (_AC(0x3,UL)<<TCR_EL1_TG1_SHIFT)
+
+/*
+ * Note that the field TCR_EL1.IPS is not available on AArch32. Also, the field
+ * TCR_EL2.IPS exists only if HCR_EL2.E2H==1.
+ */
+#define TCR_EL1_IPS_SHIFT   (32)
+#define TCR_EL1_IPS_MASK(_AC(0x7,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_32_BIT  (_AC(0x0,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_36_BIT  (_AC(0x1,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_40_BIT  (_AC(0x2,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_42_BIT  (_AC(0x3,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_44_BIT  (_AC(0x4,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_48_BIT  (_AC(0x5,ULL)<<TCR_EL1_IPS_SHIFT)
+#define TCR_EL1_IPS_52_BIT  (_AC(0x6,ULL)<<TCR_EL1_IPS_SHIFT)
+
+/*
+ * The following values correspond to the bit masks represented by
+ * TCR_EL1_IPS_XX_BIT defines.
+ */
+#define TCR_EL1_IPS_32_BIT_VAL  (32)
+#define TCR_EL1_IPS_36_BIT_VAL  (36)
+#define TCR_EL1_IPS_40_BIT_VAL  (40)
+#define TCR_EL1_IPS

[Xen-devel] [PATCH v8 09/13] arm/guest_access: Rename vgic_access_guest_memory

2017-08-09 Thread Sergej Proskurin
This commit renames the function vgic_access_guest_memory to
access_guest_memory_by_ipa. As the function name suggests, the functions
expects an IPA as argument. All invocations of this function have been
adapted accordingly. Apart from that, we have adjusted all printk
messages for cleanup and to eliminate artefacts of the function's
previous location.

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v6: We added this patch to our patch series.

v7: Renamed the function's argument ipa back to gpa.

Removed any mentioning of "vITS" in the function's printk messages
and adjusted the commit message accordingly.
---
 xen/arch/arm/guestcopy.c   | 10 +-
 xen/arch/arm/vgic-v3-its.c | 36 ++--
 xen/include/asm-arm/guest_access.h |  4 ++--
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 938ffe2668..4ee07fcea3 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -123,8 +123,8 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
  * Temporarily map one physical guest page and copy data to or from it.
  * The data to be copied cannot cross a page boundary.
  */
-int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
- uint32_t size, bool is_write)
+int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
+   uint32_t size, bool is_write)
 {
 struct page_info *page;
 uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
@@ -134,7 +134,7 @@ int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
 /* Do not cross a page boundary. */
 if ( size > (PAGE_SIZE - offset) )
 {
-printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
+printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
d->domain_id);
 return -EINVAL;
 }
@@ -142,7 +142,7 @@ int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
 page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
 if ( !page )
 {
-printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
+printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
d->domain_id);
 return -EINVAL;
 }
@@ -150,7 +150,7 @@ int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
 if ( !p2m_is_ram(p2mt) )
 {
 put_page(page);
-printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.",
+printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
d->domain_id);
 return -EINVAL;
 }
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 1af6820cab..72a5c70656 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -131,9 +131,9 @@ static int its_set_collection(struct virt_its *its, uint16_t collid,
 if ( collid >= its->max_collections )
 return -ENOENT;
 
-return vgic_access_guest_memory(its->d,
-addr + collid * sizeof(coll_table_entry_t),
-&vcpu_id, sizeof(vcpu_id), true);
+return access_guest_memory_by_ipa(its->d,
+  addr + collid * sizeof(coll_table_entry_t),
+  &vcpu_id, sizeof(vcpu_id), true);
 }
 
 /* Must be called with the ITS lock held. */
@@ -149,9 +149,9 @@ static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
 if ( collid >= its->max_collections )
 return NULL;
 
-ret = vgic_access_guest_memory(its->d,
-   addr + collid * sizeof(coll_table_entry_t),
-   &vcpu_id, sizeof(coll_table_entry_t), false);
+ret = access_guest_memory_by_ipa(its->d,
+ addr + collid * sizeof(coll_table_entry_t),
+ &vcpu_id, sizeof(coll_table_entry_t), false);
 if ( ret )
 return NULL;
 
@@ -171,9 +171,9 @@ static int its_set_itt_address(struct virt_its *its, uint32_t devid,
 if ( devid >= its->max_devices )
 return -ENOENT;
 
-return vgic_access_guest_memory(its->d,
-addr + devid * sizeof(dev_table_entry_t),
-&itt_entry, sizeof(itt_entry), true);
+return access_guest_memory_by_ipa(its->d,
+  addr + devid * sizeof(dev_table_entry_t),
+  &itt_entry, sizeof(itt_

[Xen-devel] [PATCH v8 02/13] arm/mem_access: Add defines supporting PTs with varying page sizes

2017-08-09 Thread Sergej Proskurin
AArch64 supports pages with different (4K, 16K, and 64K) sizes.  To
enable guest page table walks for various configurations, this commit
extends the defines and helpers of the current implementation.
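
The granule-dependent shifts introduced by this patch can be re-derived for the 4K case in a small self-contained sketch; the names below are hypothetical stand-ins for the patch's macros, with each lookup level's shift growing by LPAE_SHIFT per level:

```c
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical re-derivation of the 4K-granule constants from this patch. */
#define LPAE_SHIFT_4K       9
#define LPAE_ENTRY_MASK_4K  ((1u << LPAE_SHIFT_4K) - 1)
#define PAGE_SHIFT_4K       12

#define THIRD_SHIFT_4K      PAGE_SHIFT_4K                      /* 12 */
#define SECOND_SHIFT_4K     (THIRD_SHIFT_4K + LPAE_SHIFT_4K)   /* 21 */
#define FIRST_SHIFT_4K      (SECOND_SHIFT_4K + LPAE_SHIFT_4K)  /* 30 */
#define ZEROETH_SHIFT_4K    (FIRST_SHIFT_4K + LPAE_SHIFT_4K)   /* 39 */

/* Mirrors TABLE_OFFSET(): the 9-bit index into the table at a given level. */
static paddr_t table_offset_4K(paddr_t va, unsigned int shift)
{
    return (va >> shift) & LPAE_ENTRY_MASK_4K;
}
```

For example, a virtual address with bits 12, 21, 30, and 39 set indexes entry 1 at every level of a 4K-granule walk.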

Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.gr...@arm.com>
---
Cc: Stefano Stabellini <sstabell...@kernel.org>
Cc: Julien Grall <julien.gr...@arm.com>
---
v3: Eliminate redundant macro definitions by introducing generic macros.

v4: Replace existing macros with ones that generate static inline
helpers as to ease the readability of the code.

Move the introduced code into lpae.h

v5: Remove PAGE_SHIFT_* defines from lpae.h as we import them now from
the header xen/lib.h.

Remove *_guest_table_offset macros as to reduce the number of
exported macros which are only used once. Instead, use the
associated functionality directly within the
GUEST_TABLE_OFFSET_HELPERS.

Add comment in GUEST_TABLE_OFFSET_HELPERS stating that a page table
with 64K page size granularity does not have a zeroeth lookup level.

Add #undefs for GUEST_TABLE_OFFSET and GUEST_TABLE_OFFSET_HELPERS.

Remove CONFIG_ARM_64 #defines.

v6: Rename *_guest_table_offset_* helpers to *_table_offset_* as they
are sufficiently generic to be applied not only to the guest's page
table walks.

Change the type of the parameter and return value of the
*_table_offset_* helpers from vaddr_t to paddr_t to enable applying
these helpers also for other purposes such as computation of IPA
offsets in second stage translation tables.

v7: Clarify comments in the code and commit message to address AArch64
directly instead of ARMv8 in general.

Rename remaining GUEST_TABLE_* macros into TABLE_* macros, to be
consistent with *_table_offset_* helpers.

Added Reviewed-by Julien Grall.
---
 xen/include/asm-arm/lpae.h | 61 ++
 1 file changed, 61 insertions(+)

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index a62b118630..efec493313 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -3,6 +3,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include 
+
 /*
  * WARNING!  Unlike the x86 pagetable code, where l1 is the lowest level and
  * l4 is the root of the trie, the ARM pagetables follow ARM's documentation:
@@ -151,6 +153,65 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
 return (level < 3) && lpae_mapping(pte);
 }
 
+/*
+ * AArch64 supports pages with different sizes (4K, 16K, and 64K). To enable
+ * page table walks for various configurations, the following helpers enable
+ * walking the translation table with varying page size granularities.
+ */
+
+#define LPAE_SHIFT_4K   (9)
+#define LPAE_SHIFT_16K  (11)
+#define LPAE_SHIFT_64K  (13)
+
+#define lpae_entries(gran)  (_AC(1,U) << LPAE_SHIFT_##gran)
+#define lpae_entry_mask(gran)   (lpae_entries(gran) - 1)
+
+#define third_shift(gran)   (PAGE_SHIFT_##gran)
+#define third_size(gran)((paddr_t)1 << third_shift(gran))
+
+#define second_shift(gran)  (third_shift(gran) + LPAE_SHIFT_##gran)
+#define second_size(gran)   ((paddr_t)1 << second_shift(gran))
+
+#define first_shift(gran)   (second_shift(gran) + LPAE_SHIFT_##gran)
+#define first_size(gran)((paddr_t)1 << first_shift(gran))
+
+/* Note that there is no zeroeth lookup level with a 64K granule size. */
+#define zeroeth_shift(gran) (first_shift(gran) + LPAE_SHIFT_##gran)
+#define zeroeth_size(gran)  ((paddr_t)1 << zeroeth_shift(gran))
+
+#define TABLE_OFFSET(offs, gran)  (offs & lpae_entry_mask(gran))
+#define TABLE_OFFSET_HELPERS(gran)  \
+static inline paddr_t third_table_offset_##gran##K(paddr_t va)  \
+{   \
+return TABLE_OFFSET((va >> third_shift(gran##K)), gran##K); \
+}   \
+\
+static inline paddr_t second_table_offset_##gran##K(paddr_t va) \
+{   \
+return TABLE_OFFSET((va >> second_shift(gran##K)), gran##K);\
+}   \
+\
+static inline paddr_t first_table_offset_##gran##K(paddr_t va)  \
+{   \
+return TABLE_OFFSET((va >> first_shift(gran##K)), gran##K); \
+}   \
+ 

[Xen-devel] [PATCH v8 00/13] arm/mem_access: Walk guest page tables in SW if mem_access is active

2017-08-09 Thread Sergej Proskurin
Hi all,

The function p2m_mem_access_check_and_get_page is called from the
function get_page_from_gva if mem_access is active and the
hardware-aided translation of the given guest virtual address (gva) into
machine address fails. That is, if the stage-2 translation tables
constrain access to the guest's page tables, hardware-assisted
translation will fail. The idea of the function
p2m_mem_access_check_and_get_page is thus to translate the given gva and
check the requested access rights in software. However, as the current
implementation of p2m_mem_access_check_and_get_page makes use of the
hardware-aided gva to ipa translation, the translation might also fail
because of reasons stated above and will become equally relevant for the
altp2m implementation on ARM.  As such, we provide a software guest
translation table walk to address the above mentioned issue.

The current version of the implementation supports translation of both
the short-descriptor as well as the long-descriptor translation table
format on ARMv7 and ARMv8 (AArch32/AArch64).

This revised version incorporates the comments of the previous patch
series. These comprise a comment explicitly stating the fact and
position where we recursively rely on the p2m->lock. We also add casts
to fields of the struct short_desc_t in guest_walk_sd to cope with
incorrect values caused by C type promotion.

The following patch series can be found on Github[0].

Cheers,
~Sergej

[0] https://github.com/sergej-proskurin/xen (branch arm-gpt-walk-v8)

Sergej Proskurin (13):
  arm/mem_access: Add and cleanup (TCR_|TTBCR_)* defines
  arm/mem_access: Add defines supporting PTs with varying page sizes
  arm/lpae: Introduce lpae_is_page helper
  arm/mem_access: Add short-descriptor pte typedefs and macros
  arm/mem_access: Introduce GV2M_EXEC permission
  arm/mem_access: Introduce BIT_ULL bit operation
  arm/mem_access: Introduce GENMASK_ULL bit operation
  arm/guest_access: Move vgic_access_guest_memory to guest_access.h
  arm/guest_access: Rename vgic_access_guest_memory
  arm/mem_access: Add software guest-page-table walk
  arm/mem_access: Add long-descriptor based gpt
  arm/mem_access: Add short-descriptor based gpt
  arm/mem_access: Walk the guest's pt in software

 xen/arch/arm/Makefile  |   1 +
 xen/arch/arm/guest_walk.c  | 636 +
 xen/arch/arm/guestcopy.c   |  50 +++
 xen/arch/arm/mem_access.c  |  31 +-
 xen/arch/arm/vgic-v3-its.c |  37 +--
 xen/arch/arm/vgic.c|  49 ---
 xen/include/asm-arm/bitops.h   |   1 +
 xen/include/asm-arm/config.h   |   2 +
 xen/include/asm-arm/guest_access.h |   3 +
 xen/include/asm-arm/guest_walk.h   |  19 ++
 xen/include/asm-arm/lpae.h |  66 
 xen/include/asm-arm/p2m.h  |   8 +-
 xen/include/asm-arm/page.h |   1 +
 xen/include/asm-arm/processor.h|  69 +++-
 xen/include/asm-arm/short-desc.h   | 130 
 xen/include/asm-arm/vgic.h |   3 -
 xen/include/asm-x86/config.h   |   2 +
 xen/include/xen/bitops.h   |   3 +
 18 files changed, 1035 insertions(+), 76 deletions(-)
 create mode 100644 xen/arch/arm/guest_walk.c
 create mode 100644 xen/include/asm-arm/guest_walk.h
 create mode 100644 xen/include/asm-arm/short-desc.h

-- 
2.13.3




Re: [Xen-devel] [PATCH v7 13/14] arm/mem_access: Add short-descriptor based gpt

2017-08-09 Thread Sergej Proskurin
Hi Andrew,


>>> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
>>> index b258248322..7f34a2b1d3 100644
>>> --- a/xen/arch/arm/guest_walk.c
>>> +++ b/xen/arch/arm/guest_walk.c
>>> @@ -112,7 +112,12 @@ static int guest_walk_sd(const struct vcpu *v,
>>>   * level translation table does not need to be page aligned.
>>>   */
>>>  mask = GENMASK(19, 12);
>>> -paddr = (pte.walk.base << 10) | ((gva & mask) >> 10);
>>> +/*
>>> + * Cast pte.walk.base to paddr_t to cope with C type promotion
>>> of types
>>> + * smaller than int. Otherwise pte.walk.base would be casted to
>>> int and
>>> + * subsequently sign extended, thus leading to a wrong value.
>>> + */
>>> +paddr = ((paddr_t)pte.walk.base << 10) | ((gva & mask) >> 10);
>> Why not change the bitfield type from unsigned int to paddr_t ?
>>
>> The result is 100% less liable to go wrong in this way.
>>

Actually, AFAICT we would get into the same trouble as before. Because
the bitfield is smaller than an int (22 bits), it would first be
promoted to int, and then we would face the same issues as we
already had.

If that is ok for you, I will resubmit the next patch without changing
the type of the bitfield. If you should not agree with me, I would
gladly discuss this issue in v8 :)

Thanks,
~Sergej




Re: [Xen-devel] [PATCH v7 13/14] arm/mem_access: Add short-descriptor based gpt

2017-08-08 Thread Sergej Proskurin


On 08/08/2017 06:20 PM, Andrew Cooper wrote:
> On 08/08/17 16:28, Sergej Proskurin wrote:
>> On 08/08/2017 05:18 PM, Julien Grall wrote:
>>> On 08/08/17 16:17, Sergej Proskurin wrote:
>>>> Hi Julien,
>>>>
>>>>
>>>> On 07/18/2017 02:25 PM, Sergej Proskurin wrote:
>>>>> This commit adds functionality to walk the guest's page tables using
>>>>> the
>>>>> short-descriptor translation table format for both ARMv7 and ARMv8. The
>>>>> implementation is based on ARM DDI 0487B-a J1-6002 and ARM DDI 0406C-b
>>>>> B3-1506.
>>>>>
>>>>> Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
>>>>> Acked-by: Julien Grall <julien.gr...@arm.com>
>>>> As you have already Acked this patch, I would like to ask whether I
>>>> should remove your Acked-by for now as I have extended the previous
>>>> patch by additional casts of the pte.*.base fields to (paddr_t) as
>>>> discussed in patch 00/14.
>>> I am fine with this, assuming this is the only change made.
>> The changes are limited to 4 similar casts to (paddr_t) in total and an
>> additional comment. Here are the only changes in this patch:
>>
>> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
>> index b258248322..7f34a2b1d3 100644
>> --- a/xen/arch/arm/guest_walk.c
>> +++ b/xen/arch/arm/guest_walk.c
>> @@ -112,7 +112,12 @@ static int guest_walk_sd(const struct vcpu *v,
>>   * level translation table does not need to be page aligned.
>>   */
>>  mask = GENMASK(19, 12);
>> -paddr = (pte.walk.base << 10) | ((gva & mask) >> 10);
>> +/*
>> + * Cast pte.walk.base to paddr_t to cope with C type promotion
>> of types
>> + * smaller than int. Otherwise pte.walk.base would be casted to
>> int and
>> + * subsequently sign extended, thus leading to a wrong value.
>> + */
>> +paddr = ((paddr_t)pte.walk.base << 10) | ((gva & mask) >> 10);
> Why not change the bitfield type from unsigned int to paddr_t ?
>
> The result is 100% less liable to go wrong in this way.
>

I absolutely agree :)

Julien, would that be ok for you if I changed the type of the base field
in short_desc_* structs accordingly? Or shall I remove your Acked-by for
this?

Thanks,
~Sergej



Re: [Xen-devel] [PATCH v7 13/14] arm/mem_access: Add short-descriptor based gpt

2017-08-08 Thread Sergej Proskurin


On 08/08/2017 05:18 PM, Julien Grall wrote:
>
>
> On 08/08/17 16:17, Sergej Proskurin wrote:
>> Hi Julien,
>>
>>
>> On 07/18/2017 02:25 PM, Sergej Proskurin wrote:
>>> This commit adds functionality to walk the guest's page tables using
>>> the
>>> short-descriptor translation table format for both ARMv7 and ARMv8. The
>>> implementation is based on ARM DDI 0487B-a J1-6002 and ARM DDI 0406C-b
>>> B3-1506.
>>>
>>> Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
>>> Acked-by: Julien Grall <julien.gr...@arm.com>
>>
>> As you have already Acked this patch, I would like to ask whether I
>> should remove your Acked-by for now as I have extended the previous
>> patch by additional casts of the pte.*.base fields to (paddr_t) as
>> discussed in patch 00/14.
>
> I am fine with this, assuming this is the only change made.

The changes are limited to 4 similar casts to (paddr_t) in total and an
additional comment. Here are the only changes in this patch:

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index b258248322..7f34a2b1d3 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -112,7 +112,12 @@ static int guest_walk_sd(const struct vcpu *v,
  * level translation table does not need to be page aligned.
  */
 mask = GENMASK(19, 12);
-paddr = (pte.walk.base << 10) | ((gva & mask) >> 10);
+/*
+ * Cast pte.walk.base to paddr_t to cope with C type promotion
of types
+ * smaller than int. Otherwise pte.walk.base would be casted to
int and
+ * subsequently sign extended, thus leading to a wrong value.
+ */
+paddr = ((paddr_t)pte.walk.base << 10) | ((gva & mask) >> 10);

 /* Access the guest's memory to read only one PTE. */
 ret = access_guest_memory_by_ipa(d, paddr, &pte, sizeof(short_desc_t), false);
@@ -125,7 +130,7 @@ static int guest_walk_sd(const struct vcpu *v,
 if ( pte.pg.page ) /* Small page. */
 {
 mask = (1ULL << L2DESC_SMALL_PAGE_SHIFT) - 1;
-*ipa = (pte.pg.base << L2DESC_SMALL_PAGE_SHIFT) | (gva & mask);
+*ipa = ((paddr_t)pte.pg.base << L2DESC_SMALL_PAGE_SHIFT) | (gva & mask);

 /* Set execute permissions associated with the small page. */
 if ( !pte.pg.xn )
@@ -134,7 +139,7 @@ static int guest_walk_sd(const struct vcpu *v,
 else /* Large page. */
 {
 mask = (1ULL << L2DESC_LARGE_PAGE_SHIFT) - 1;
-*ipa = (pte.lpg.base << L2DESC_LARGE_PAGE_SHIFT) | (gva & mask);
+*ipa = ((paddr_t)pte.lpg.base << L2DESC_LARGE_PAGE_SHIFT) | (gva & mask);

 /* Set execute permissions associated with the large page. */
 if ( !pte.lpg.xn )
@@ -152,7 +157,7 @@ static int guest_walk_sd(const struct vcpu *v,
 if ( !pte.sec.supersec ) /* Section */
 {
 mask = (1ULL << L1DESC_SECTION_SHIFT) - 1;
-*ipa = (pte.sec.base << L1DESC_SECTION_SHIFT) | (gva & mask);
+*ipa = ((paddr_t)pte.sec.base << L1DESC_SECTION_SHIFT) | (gva & mask);
 }
 else /* Supersection */
 {

Thanks,
~Sergej



Re: [Xen-devel] [PATCH v7 13/14] arm/mem_access: Add short-descriptor based gpt

2017-08-08 Thread Sergej Proskurin
Hi Julien,


On 07/18/2017 02:25 PM, Sergej Proskurin wrote:
> This commit adds functionality to walk the guest's page tables using the
> short-descriptor translation table format for both ARMv7 and ARMv8. The
> implementation is based on ARM DDI 0487B-a J1-6002 and ARM DDI 0406C-b
> B3-1506.
>
> Signed-off-by: Sergej Proskurin <prosku...@sec.in.tum.de>
> Acked-by: Julien Grall <julien.gr...@arm.com>

As you have already Acked this patch, I would like to ask whether I
should remove your Acked-by for now as I have extended the previous
patch by additional casts of the pte.*.base fields to (paddr_t) as
discussed in patch 00/14.

Thanks,
~Sergej



Re: [Xen-devel] [PATCH v7 00/14] arm/mem_access: Walk guest page tables in SW if mem_access is active

2017-08-08 Thread Sergej Proskurin

On 08/08/2017 04:58 PM, Andrew Cooper wrote:
> On 08/08/17 15:47, Sergej Proskurin wrote:
>> Hi Julien,
>>
>>> The patch belows solve my problem:
>>>
>>> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
>>> index b258248322..6ca994e438 100644
>>> --- a/xen/arch/arm/guest_walk.c
>>> +++ b/xen/arch/arm/guest_walk.c
>>> @@ -112,7 +112,7 @@ static int guest_walk_sd(const struct vcpu *v,
>>>   * level translation table does not need to be page aligned.
>>>   */
>>>  mask = GENMASK(19, 12);
>>> -paddr = (pte.walk.base << 10) | ((gva & mask) >> 10);
>>> +paddr = ((paddr_t)pte.walk.base << 10) | ((gva & mask) >> 10);
>>>  
>>>  /* Access the guest's memory to read only one PTE. */
>>>  ret = access_guest_memory_by_ipa(d, paddr, &pte, sizeof(short_desc_t), false);
>>>
>>> This is because pte.walk.base is encoded on unsigned int:22 bits. A shift 
>>> by 10 will not
>>> fit an integer, and my compiler seems to promote it to "signed long long". 
>>> Hence the bogus
>>> address.
>>>
>> That's quite an interesting phenomenon :) I have just played around with
>> this, and it does indeed appear that the value is cast to a signed
>> result! What I don't yet understand is the following: an unsigned int
>> with a length of 22 bits should exactly fit an integer after a
>> left shift of 10 (or am I missing something?)
> C type promotion ftw!
>
> All integral types smaller than int are promoted to int before any
> operations on them.  This includes things like unsigned char/short etc.
>
> Then, the type is promoted to match that of the other operand, which
> might be a wider type (e.g. long) or an unsigned version of the same type.

Thanks Andrew, I did not know that!

Cheers,
~Sergej
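
The promotion-and-sign-extension effect explained above can be reproduced with a minimal, self-contained C snippet; struct pte_sd below is a hypothetical stand-in for short_desc_t's 22-bit base field:

```c
#include <stdint.h>

/* Hypothetical stand-in for the 22-bit base field discussed above. */
struct pte_sd {
    unsigned int base:22;
};

/* Buggy: base promotes to (signed) int; the shift can set bit 31, and the
 * resulting negative int is sign-extended when widened to 64 bits.
 * (Strictly, shifting a 1 into the sign bit is undefined behaviour, but
 * common compilers produce exactly the bogus address seen in this thread.) */
static uint64_t walk_base_buggy(struct pte_sd p)
{
    return p.base << 10;
}

/* Fixed: cast to a 64-bit type before shifting, as done in the v8 series. */
static uint64_t walk_base_fixed(struct pte_sd p)
{
    return (uint64_t)p.base << 10;
}
```

With all 22 bits of base set, the fixed version yields 0xFFFFFC00, while the buggy version typically yields the sign-extended 0xFFFFFFFFFFFFFC00.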



Re: [Xen-devel] [PATCH v7 00/14] arm/mem_access: Walk guest page tables in SW if mem_access is active

2017-08-08 Thread Sergej Proskurin
Hi Julien,

> The patch belows solve my problem:
>
> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
> index b258248322..6ca994e438 100644
> --- a/xen/arch/arm/guest_walk.c
> +++ b/xen/arch/arm/guest_walk.c
> @@ -112,7 +112,7 @@ static int guest_walk_sd(const struct vcpu *v,
>   * level translation table does not need to be page aligned.
>   */
>  mask = GENMASK(19, 12);
> -paddr = (pte.walk.base << 10) | ((gva & mask) >> 10);
> +paddr = ((paddr_t)pte.walk.base << 10) | ((gva & mask) >> 10);
>  
>  /* Access the guest's memory to read only one PTE. */
>  ret = access_guest_memory_by_ipa(d, paddr, &pte, sizeof(short_desc_t), false);
>
> This is because pte.walk.base is encoded on unsigned int:22 bits. A shift by 
> 10 will not
> fit an integer, and my compiler seems to promote it to "signed long long". 
> Hence the bogus
> address.
>


That's quite an interesting phenomenon :) I have just played around with
this, and it does indeed appear that the value is cast to a signed
result! What I don't yet understand is the following: an unsigned int
with a length of 22 bits should exactly fit an integer after a
left shift of 10 (or am I missing something?).

Anyway, thanks for the patch! V8 containing this change will follow soon.

Thanks,
~Sergej





Re: [Xen-devel] [PATCH v7 00/14] arm/mem_access: Walk guest page tables in SW if mem_access is active

2017-08-08 Thread Sergej Proskurin
Hi Julien,


On 08/04/2017 11:15 AM, Sergej Proskurin wrote:
> Hi Julien,
>
> Sorry for the late reply.
>
> On 07/31/2017 04:38 PM, Julien Grall wrote:
>>
>> On 18/07/17 13:24, Sergej Proskurin wrote:
>>> Hi all,
>> Hi,
>>
>>> The function p2m_mem_access_check_and_get_page is called from the function
>>> get_page_from_gva if mem_access is active and the hardware-aided 
>>> translation of
>>> the given guest virtual address (gva) into machine address fails. That is, 
>>> if
>>> the stage-2 translation tables constrain access to the guest's page tables,
>>> hardware-assisted translation will fail. The idea of the function
>>> p2m_mem_access_check_and_get_page is thus to translate the given gva and 
>>> check
>>> the requested access rights in software. However, as the current 
>>> implementation
>>> of p2m_mem_access_check_and_get_page makes use of the hardware-aided gva to 
>>> ipa
>>> translation, the translation might also fail because of reasons stated above
>>> and will become equally relevant for the altp2m implementation on ARM.  As
>>> such, we provide a software guest translation table walk to address the 
>>> above
>>> mentioned issue.
>>>
>>> The current version of the implementation supports translation of both the
>>> short-descriptor as well as the long-descriptor translation table format on
>>> ARMv7 and ARMv8 (AArch32/AArch64).
>>>
>>> This revised version incorporates the comments of the previous patch 
>>> series. In
>>> this patch version we refine the definition of PAGE_SIZE_GRAN and
>>> PAGE_MASK_GRAN. In particular, we use PAGE_SIZE_GRAN to define 
>>> PAGE_MASK_GRAN
>>> and thus avoid these defines to have a differing type. We also changed the
>>> previously introduced macro BITS_PER_LONG_LONG to BITS_PER_LLONG. Further
>>> changes comprise minor adjustments in comments and renaming of macros and
>>> function parameters. Some additional changes comprising code readability and
>>> correct type usage have been made and stated in the individual commits.
>>>
>>> The following patch series can be found on Github[0].
>> I tried this series today with the change [1] in Xen to check the translation
>> is valid. However, I got a failure when booting non-LPAE arm32 Dom0:
>>
> That's odd.. Thanks for the information. I will investigate this issue
> next week, as soon as I have access to our ARMv7 board.
>
>> (XEN) Loading kernel from boot module @ 80008000
>> (XEN) Allocating 1:1 mappings totalling 512MB for dom0:
>> (XEN) BANK[0] 0x00a000-0x00c000 (512MB)
>> (XEN) Grant table range: 0x00ffe0-0x00ffe6a000
>> (XEN) Loading zImage from 80008000 to 
>> a780-a7f50e28
>> (XEN) Allocating PPI 16 for event channel interrupt
>> (XEN) Loading dom0 DTB to 0xa800-0xa8001f8e
>> (XEN) Std. Loglevel: All
>> (XEN) Guest Loglevel: All
>> (XEN) guest_walk_tables: gva 0xffeff018 pipa 0x1c090018
>> (XEN) access_guest_memory_by_ipa: gpa 0xa0207ff8
>> (XEN) access_guest_memory_by_ipa: gpa 0xa13aebfc
>> (XEN) d0: guestcopy: failed to get table entry.
>> (XEN) Xen BUG at traps.c:2737
>> (XEN) [ Xen-4.10-unstable  arm32  debug=y   Not tainted ]
>> (XEN) CPU:0
>> (XEN) PC: 00264dc0 do_trap_guest_sync+0x161c/0x1804
>> (XEN) CPSR:   a05a MODE:Hypervisor
>> (XEN)  R0: ffea R1:  R2:  R3: 004a
>> (XEN)  R4: 93830007 R5: 47fcff58 R6: 93830007 R7: 0007
>> (XEN)  R8: 1c09 R9:  R10: R11:47fcff54 R12:ffea
>> (XEN) HYP: SP: 47fcfee4 LR: 00258dec
>> (XEN) 
>> (XEN)   VTCR_EL2: 80003558
>> (XEN)  VTTBR_EL2: 00010008f3ffc000
>> (XEN) 
>> (XEN)  SCTLR_EL2: 30cd187f
>> (XEN)HCR_EL2: 0038663f
>> (XEN)  TTBR0_EL2: fff02000
>> (XEN) 
>> (XEN)ESR_EL2: 
>> (XEN)  HPFAR_EL2: 001c0900
>> (XEN)  HDFAR: ffeff018
>> (XEN)  HIFAR: 
>> (XEN) 
>> (XEN) Xen stack trace from sp=47fcfee4:
>> (XEN) 47fcff34 00256008 47fcfefc 47fcfefc 20da 0004 
>> 47fd48f4
>> (XEN)002d5ef0 0004 002d1f00 0004  002d1f00 c163f740 
>> 93830007
>> (XEN)ffeff018 1c090018  47fcff44 c15e70ac 005b c15e70ac 
>> c074400c
>> (XEN)0031  c0743ff8 47fcff58 00268ce0 c15e70ac 005b 
>> 0031
>>
