[PATCH] MAINTAINERS: Update e-mail address for Christoffer Dall

2018-04-16 Thread Christoffer Dall
Update my e-mail address to a working address.

Signed-off-by: Christoffer Dall 
---
 MAINTAINERS | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 0a1410d5a621..3e9c99d2620b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7738,7 +7738,7 @@ F:	arch/x86/include/asm/svm.h
 F:	arch/x86/kvm/svm.c
 
 KERNEL VIRTUAL MACHINE FOR ARM (KVM/arm)
-M:	Christoffer Dall 
+M:	Christoffer Dall 
 M:	Marc Zyngier 
 L:	linux-arm-ker...@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
@@ -7752,7 +7752,7 @@ F:	virt/kvm/arm/
 F:	include/kvm/arm_*
 
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
-M:	Christoffer Dall 
+M:	Christoffer Dall 
 M:	Marc Zyngier 
 L:	linux-arm-ker...@lists.infradead.org (moderated for non-subscribers)
 L:	kvmarm@lists.cs.columbia.edu
-- 
2.14.2



Re: [PATCH v2 14/17] kvm: arm64: Switch to per VM IPA limit

2018-04-16 Thread Suzuki K Poulose

On 13/04/18 17:27, Punit Agrawal wrote:

Hi Suzuki,

I haven't had a chance to look at the code but noticed one issue below.

Suzuki K Poulose  writes:


Now that we can manage the stage2 page table per VM, switch the
configuration details to the per-VM instance. We keep track of the
IPA bits, the number of page table levels and the VTCR bits (which
depend on the IPA and the number of levels). While at it, remove the
unused pgd_lock field from kvm_arch for arm64.
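
The per-VM state this adds would look roughly like the sketch below;
phys_shift and s2_levels match the macros in this patch, while the
vtcr field name is an assumption, not necessarily the final layout:

	struct kvm_arch {
		/* log2 of the IPA size for this VM's stage2 tables */
		u32	phys_shift;
		/* Number of stage2 page table levels, derived from phys_shift */
		u32	s2_levels;
		/* Pre-computed VTCR_EL2 value for this VM (assumed name) */
		u64	vtcr;
		/* ... existing fields (VMID, stage2 pgd, etc.) ... */
	};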

Cc: Marc Zyngier 
Cc: Christoffer Dall 
Signed-off-by: Suzuki K Poulose 
---
  arch/arm64/include/asm/kvm_host.h   | 14 --
  arch/arm64/include/asm/kvm_mmu.h| 11 +--
  arch/arm64/include/asm/stage2_pgtable.h |  1 -
  arch/arm64/kvm/hyp/switch.c |  3 +--
  virt/kvm/arm/mmu.c  |  4 
  5 files changed, 26 insertions(+), 7 deletions(-)



[...]


diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index bb458bf..e86d7f4 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -136,9 +136,10 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
  */
 #define KVM_PHYS_SHIFT	(40)
 
-#define kvm_phys_shift(kvm)		KVM_PHYS_SHIFT
+#define kvm_phys_shift(kvm)		(kvm->arch.phys_shift)
 #define kvm_phys_size(kvm)		(_AC(1, ULL) << kvm_phys_shift(kvm))
 #define kvm_phys_mask(kvm)		(kvm_phys_size(kvm) - _AC(1, ULL))
+#define kvm_stage2_levels(kvm)		(kvm->arch.s2_levels)
 
 static inline bool kvm_page_empty(void *ptr)
 {


[...]


diff --git a/arch/arm64/include/asm/stage2_pgtable.h b/arch/arm64/include/asm/stage2_pgtable.h
index 33e8ebb..9b75b83 100644
--- a/arch/arm64/include/asm/stage2_pgtable.h
+++ b/arch/arm64/include/asm/stage2_pgtable.h
@@ -44,7 +44,6 @@
  */
 #define __s2_pgd_ptrs(pa, lvls)	(1 << ((pa) - pt_levels_pgdir_shift((lvls))))
 
-#define kvm_stage2_levels(kvm)		stage2_pt_levels(kvm_phys_shift(kvm))
 #define stage2_pgdir_shift(kvm)	pt_levels_pgdir_shift(kvm_stage2_levels(kvm))
 #define stage2_pgdir_size(kvm)	(_AC(1, UL) << stage2_pgdir_shift((kvm)))
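
As a worked example of the arithmetic above, assuming a 4K granule
where pt_levels_pgdir_shift(lvls) = PAGE_SHIFT + 9 * (lvls - 1):

	/*
	 * 40-bit IPA with 3 levels:
	 *   stage2_pgdir_shift   = 12 + 9 * 2     = 30
	 *   __s2_pgd_ptrs(40, 3) = 1 << (40 - 30) = 1024 entries
	 *
	 * 1024 entries * 8 bytes = 8KB, i.e. two concatenated 4K pages
	 * at the initial lookup level.
	 */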


[...]


diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 7a264c6..746f38e 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -753,6 +753,10 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
 		return -EINVAL;
 	}
 
+	/* Make sure we have the stage2 configured for this VM */
+	if (WARN_ON(!kvm_stage2_levels(kvm)))
+		return -EINVAL;
+


This hunk breaks the 32-bit build, as kvm_stage2_levels() isn't defined on
arm.


Thanks for spotting. I have fixed this locally in my next version to check
kvm_phys_shift() instead, as I plan to delay the level selection to the
actual allocation of the table, giving us a fallback to increase the number
of levels if we are unable to allocate contiguous pages (e.g., 16 * 64K
pages with, say, a 46-bit IPA).
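
That fallback could look something like the sketch below; note that
stage2_pgd_order() and kvm_stage2_set_levels() are hypothetical helpers,
not functions from the posted series:

	static pgd_t *stage2_alloc_pgd(struct kvm *kvm)
	{
		u32 lvls;

		/*
		 * Start with the fewest levels (the largest concatenated
		 * PGD) and add a level, shrinking the contiguous allocation
		 * required, whenever the allocation fails.
		 */
		for (lvls = stage2_pt_levels(kvm_phys_shift(kvm)); lvls <= 4; lvls++) {
			pgd_t *pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
							stage2_pgd_order(kvm, lvls));
			if (pgd) {
				kvm_stage2_set_levels(kvm, lvls);
				return pgd;
			}
		}
		return NULL;
	}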


Cheers
Suzuki


Re: [PATCH v2 12/17] kvm: arm/arm64: Expose supported physical address limit for VM

2018-04-16 Thread Suzuki K Poulose

On 13/04/18 14:21, Peter Maydell wrote:

On 27 March 2018 at 14:15, Suzuki K Poulose  wrote:

Expose the maximum physical address size supported by the host
for a VM. This could later be used by userspace to choose the
appropriate size for a given VM. The limit is determined as the
minimum of the actual CPU limit, the kernel limit (i.e., either 48 or 52)
and the stage2 page table support limit (which is 40 bits at the moment).
For backward compatibility, we support a minimum of 40 bits. The limit
will be lifted as we add stage2 support for the host kernel PA limit.

This value may be different from what is exposed to the VM via
CPU ID registers. The limit only applies to the stage2 page table.
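
Concretely, the computation described above reduces to something like
the sketch below; get_system_pa_bits() is a hypothetical helper
returning the system-wide safe ID_AA64MMFR0_EL1.PARange in bits, and
the posted code may differ:

	static u32 kvm_arm_max_vm_phys_shift(void)	/* name assumed */
	{
		u32 pa_max = min3(get_system_pa_bits(),	     /* CPU limit */
				  (u32)CONFIG_ARM64_PA_BITS, /* kernel: 48 or 52 */
				  40U);			     /* stage2 limit today */

		return max(pa_max, 40U);	/* backward-compat floor */
	}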

Cc: Christoffer Dall 
Cc: Marc Zyngier 
Cc: Peter Maydell 
Signed-off-by: Suzuki K Poulose 
---
  Documentation/virtual/kvm/api.txt | 14 ++
  arch/arm/include/asm/kvm_mmu.h|  5 +
  arch/arm64/include/asm/kvm_mmu.h  |  5 +
  include/uapi/linux/kvm.h  |  6 ++
  virt/kvm/arm/arm.c|  6 ++
  5 files changed, 36 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 792fa87..55908a8 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -3500,6 +3500,20 @@ Returns: 0 on success; -1 on error
  This ioctl can be used to unregister the guest memory region registered
  with KVM_MEMORY_ENCRYPT_REG_REGION ioctl above.

+4.113 KVM_ARM_GET_MAX_VM_PHYS_SHIFT
+Capability: basic
+Architectures: arm, arm64
+Type: system ioctl
+Parameters: none
+Returns: log2(Maximum physical address space size) supported by the
+hyperviosr.


typo: "hypervisor".



Will fix it.


+
+This ioctl can be used to identify the maximum physical address space size
+supported by the hypervisor.


Is that the physical address space on the host, or the physical
address space size we present to the guest?


It is the size of the address space we present to the guest. I will update
the documentation to make it more clear.




The returned value indicates the maximum size
+of the address that can be resolved by the stage2 translation table on
+arm/arm64. On arm64, the value is decided based on the host kernel
+configuration and the system wide safe value of ID_AA64MMFR0_EL1:PARange.
+This may not match the value exposed to the VM in CPU ID registers.


Isn't it likely to confuse the guest if we lie to it about the PA range it
sees? When would the two values differ?


On a heterogeneous system, the guest could see different values of
PARange on the same VCPU, so that is not safe for a guest at the
moment. Ideally, we should emulate PARange to provide the system-wide
safe value, which the guest can read.

We don't touch the emulation of PARange in the ID registers in this
set. All we do (in the next patches) is limit the address space size
provided to the guest. Maybe we could update PARange to the limit
imposed and emulate the field.



Do we also need a 'set' operation, so userspace can create a VM
with a 40-bit address space on a CPU that supports more than that,
or does it just work?


It just works as before: a 40-bit address space is created without any
additional steps. All we do is allow creating a VM with a bigger address
space by specifying the size in the "type" field. The other question is:
does it really matter what a guest sees in PARange versus what it is
actually provided with? E.g., on my Juno, the A53s have 40 bits and the
A57s have 44 bits, while the system uses only 40 bits.

This will be true even with the new change, i.e., we don't allow a size
beyond the limit supported by all the CPUs on the system.
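
For illustration, the intended userspace flow would be roughly the
sketch below, assuming (as proposed here) that the IPA shift is passed
in the KVM_CREATE_VM type argument, with 0 keeping today's 40-bit
behaviour; error handling is elided:

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int kvm = open("/dev/kvm", O_RDWR);

	/* Largest stage2 IPA size the hypervisor supports, as log2. */
	int max_shift = ioctl(kvm, KVM_ARM_GET_MAX_VM_PHYS_SHIFT);

	/* Ask for a 44-bit IPA space, clamped to the reported limit. */
	int ipa_shift = max_shift < 44 ? max_shift : 44;
	int vm = ioctl(kvm, KVM_CREATE_VM, ipa_shift);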



What's the x86 API for KVM to tell userspace about physical address
range restrictions?


From a quick look, the limit comes from CPUID (leaf 0x80000008?). So it
could be via the existing per-VCPU get/set_cpuid{,2}() API on x86.
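
For reference, a sketch of reading that limit on x86 with GCC's
<cpuid.h>; EAX[7:0] of leaf 0x80000008 reports the physical address
width:

	#include <cpuid.h>

	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 0x80000008: EAX[7:0] = physical address bits. */
	__cpuid(0x80000008, eax, ebx, ecx, edx);
	unsigned int phys_bits = eax & 0xff;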

Suzuki


Re: [PATCH] KVM: arm/arm64: Close VMID generation race

2018-04-16 Thread Marc Zyngier
On 16/04/18 11:05, Shannon Zhao wrote:
> 
> 
> On 2018/4/11 9:30, Shannon Zhao wrote:
>>
>> On 2018/4/10 23:37, Marc Zyngier wrote:

[...]

>>> I don't mind either way. If you can be bothered to write a proper commit
>>> log for this, I'll take it. What I'd really want is Shannon to indicate
>>> whether or not this solves the issue he was seeing.
>>>
>> I'll test Marc's patch. This will take about 3 days since it's not 100%
>> reproducible.
> Hi Marc,
> 
> I've run the test for about 4 days. The issue doesn't appear.
> So Tested-by: Shannon Zhao 

Thanks Shannon, much appreciated. I'll send the fix upstream towards the
end of the week.

Cheers,

M.
-- 
Jazz is not dead. It just smells funny...


Re: [PATCH] KVM: arm/arm64: Close VMID generation race

2018-04-16 Thread Shannon Zhao


On 2018/4/11 9:30, Shannon Zhao wrote:
> 
> On 2018/4/10 23:37, Marc Zyngier wrote:
>> On 10/04/18 16:24, Mark Rutland wrote:
>>> On Tue, Apr 10, 2018 at 05:05:40PM +0200, Christoffer Dall wrote:
>>>> On Tue, Apr 10, 2018 at 11:51:19AM +0100, Mark Rutland wrote:
>>>>> I think we also need to update kvm->arch.vttbr before updating
>>>>> kvm->arch.vmid_gen, otherwise another CPU can come in, see that the
>>>>> vmid_gen is up-to-date, jump to hyp, and program a stale VTTBR (with the
>>>>> old VMID).
>>>>>
>>>>> With the smp_wmb() and update of kvm->arch.vmid_gen moved to the end of
>>>>> the critical section, I think that works, modulo using READ_ONCE() and
>>>>> WRITE_ONCE() to ensure single-copy-atomicity of the fields we access
>>>>> locklessly.
>>>>
>>>> Indeed, you're right.  It would look something like this, then:
>>>>
>>>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>>>> index 2e43f9d42bd5..6cb08995e7ff 100644
>>>> --- a/virt/kvm/arm/arm.c
>>>> +++ b/virt/kvm/arm/arm.c
>>>> @@ -450,7 +450,9 @@ void force_vm_exit(const cpumask_t *mask)
>>>>   */
>>>>  static bool need_new_vmid_gen(struct kvm *kvm)
>>>>  {
>>>> -	return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
>>>> +	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
>>>> +	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
>>>> +	return unlikely(READ_ONCE(kvm->arch.vmid_gen) != current_vmid_gen);
>>>>  }
>>>>  
>>>>  /**
>>>> @@ -500,7 +502,6 @@ static void update_vttbr(struct kvm *kvm)
>>>>  		kvm_call_hyp(__kvm_flush_vm_context);
>>>>  	}
>>>>  
>>>> -	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
>>>>  	kvm->arch.vmid = kvm_next_vmid;
>>>>  	kvm_next_vmid++;
>>>>  	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
>>>> @@ -509,7 +510,10 @@ static void update_vttbr(struct kvm *kvm)
>>>>  	pgd_phys = virt_to_phys(kvm->arch.pgd);
>>>>  	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
>>>>  	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
>>>> -	kvm->arch.vttbr = pgd_phys | vmid;
>>>> +	WRITE_ONCE(kvm->arch.vttbr, pgd_phys | vmid);
>>>> +
>>>> +	smp_wmb(); /* Ensure vttbr update is observed before vmid_gen update */
>>>> +	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
>>>>  
>>>>  	spin_unlock(&kvm_vmid_lock);
>>>>  }
>>>
>>> I think that's right, yes.
>>>
>>> We could replace the smp_{r,w}mb() barriers with an acquire of the
>>> kvm_vmid_gen and a release of kvm->arch.vmid_gen, but if we're really
>>> trying to optimize things there are larger algorithmic changes necessary
>>> anyhow.
>>>
>>>> It's probably easier to convince ourselves about the correctness of
>>>> Marc's code using a rwlock instead, though.  Thoughts?
>>>
>>> I believe that Marc's preference was the rwlock; I have no preference
>>> either way.
>> 
>> I don't mind either way. If you can be bothered to write a proper commit
>> log for this, I'll take it. What I'd really want is Shannon to indicate
>> whether or not this solves the issue he was seeing.
>> 
> I'll test Marc's patch. This will take about 3 days since it's not 100%
> reproducible.
Hi Marc,

I've run the test for about 4 days. The issue doesn't appear.
So Tested-by: Shannon Zhao 

Thanks,
-- 
Shannon
