Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

2012-02-13 Thread Marc Zyngier
On 12/02/12 01:12, Christoffer Dall wrote:
 On Sat, Feb 11, 2012 at 10:33 AM, Antonios Motakis
 a.mota...@virtualopensystems.com wrote:
 On 02/11/2012 06:35 PM, Christoffer Dall wrote:

 On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
 a.mota...@virtualopensystems.com  wrote:

 On 02/10/2012 11:22 PM, Marc Zyngier wrote:

 +ENTRY(__kvm_tlb_flush_vmid)
 +       hvc     #0                      @ Switch to Hyp mode
 +       push    {r2, r3}

 +       ldrd    r2, r3, [r0, #KVM_VTTBR]
 +       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
 +       isb
 +       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
 +       dsb
 +       isb
 +       mov     r2, #0
 +       mov     r3, #0
 +       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
 +       isb
 +
 +       pop     {r2, r3}
 +       hvc     #0                      @ Back to SVC
 +       mov     pc, lr
 +ENDPROC(__kvm_tlb_flush_vmid)


 With the last VMID implementation, you could get the equivalent effect of a
 per-VMID flush by just getting a new VMID for the current VM. So you could
 do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
 save the overhead of that flush (you will do a complete flush every 255
 times instead of a small one every single time).

 to do this you would need to send an IPI if the guest is currently
 executing on another CPU and make it exit the guest, so that the VMID
 assignment will run before the guest potentially accesses that TLB
 entry that points to the page that was just reclaimed - which I am not
 sure will be better than this solution.

 Don't you have to do this anyway? You'd want the flush to be effective on
 all CPUs before proceeding.
 
 hmm yeah, actually you do need this. Unless the -IS version of the
 flush instruction covers all relevant cores in this case. Marc, I
 don't think that the processor clearing out the page table entry will
 necessarily belong to the same inner-shareable domain as the processor
 potentially executing the VM, so therefore the -IS flushing version
 would not be sufficient and we actually have to go and send an IPI.

If we forget about the 11MPCore (which doesn't broadcast the TLB
invalidation in hardware), the TLBIALLIS operation makes sure all cores
belonging to the same inner shareable domain will see the TLB
invalidation at the same time. If they don't, this is a hardware bug.

Now, I do not have an example of a system where two CPUs are not part of
the same IS domain. Even big.LITTLE has all of the potential 8 cores in
an IS domain. If such a system exists one of these days, then it will be
worth considering having a separate method to cope with the case. Until
then, my opinion is to keep it as simple as possible.

M.
-- 
Jazz is not dead. It just smells funny...

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

2012-02-13 Thread Christoffer Dall
On Mon, Feb 13, 2012 at 5:13 AM, Marc Zyngier marc.zyng...@arm.com wrote:
 On 12/02/12 01:12, Christoffer Dall wrote:
 On Sat, Feb 11, 2012 at 10:33 AM, Antonios Motakis
 a.mota...@virtualopensystems.com wrote:
 On 02/11/2012 06:35 PM, Christoffer Dall wrote:

 On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
 a.mota...@virtualopensystems.com  wrote:

 On 02/10/2012 11:22 PM, Marc Zyngier wrote:

 +ENTRY(__kvm_tlb_flush_vmid)
 +       hvc     #0                      @ Switch to Hyp mode
 +       push    {r2, r3}

 +       ldrd    r2, r3, [r0, #KVM_VTTBR]
 +       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
 +       isb
 +       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
 +       dsb
 +       isb
 +       mov     r2, #0
 +       mov     r3, #0
 +       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
 +       isb
 +
 +       pop     {r2, r3}
 +       hvc     #0                      @ Back to SVC
 +       mov     pc, lr
 +ENDPROC(__kvm_tlb_flush_vmid)


 With the last VMID implementation, you could get the equivalent effect of a
 per-VMID flush by just getting a new VMID for the current VM. So you could
 do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
 save the overhead of that flush (you will do a complete flush every 255
 times instead of a small one every single time).

 to do this you would need to send an IPI if the guest is currently
 executing on another CPU and make it exit the guest, so that the VMID
 assignment will run before the guest potentially accesses that TLB
 entry that points to the page that was just reclaimed - which I am not
 sure will be better than this solution.

 Don't you have to do this anyway? You'd want the flush to be effective on
 all CPUs before proceeding.

 hmm yeah, actually you do need this. Unless the -IS version of the
 flush instruction covers all relevant cores in this case. Marc, I
 don't think that the processor clearing out the page table entry will
 necessarily belong to the same inner-shareable domain as the processor
 potentially executing the VM, so therefore the -IS flushing version
 would not be sufficient and we actually have to go and send an IPI.

 If we forget about the 11MPCore (which doesn't broadcast the TLB
 invalidation in hardware), the TLBIALLIS operation makes sure all cores
 belonging to the same inner shareable domain will see the TLB
 invalidation at the same time. If they don't, this is a hardware bug.

 Now, I do not have an example of a system where two CPUs are not part of
 the same IS domain. Even big.LITTLE has all of the potential 8 cores in
 an IS domain. If such a system exists one of these days, then it will be
 worth considering having a separate method to cope with the case. Until
 then, my opinion is to keep it as simple as possible.


ok, sounds good to me. Although, perhaps keep this as a comment somewhere...


Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

2012-02-12 Thread Alexander Graf


On 12.02.2012, at 02:12, Christoffer Dall c.d...@virtualopensystems.com wrote:

 On Sat, Feb 11, 2012 at 10:33 AM, Antonios Motakis
 a.mota...@virtualopensystems.com wrote:
 On 02/11/2012 06:35 PM, Christoffer Dall wrote:
 
 On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
 a.mota...@virtualopensystems.com  wrote:
 
 On 02/10/2012 11:22 PM, Marc Zyngier wrote:
 
 +ENTRY(__kvm_tlb_flush_vmid)
 +       hvc     #0                      @ Switch to Hyp mode
 +       push    {r2, r3}

 +       ldrd    r2, r3, [r0, #KVM_VTTBR]
 +       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
 +       isb
 +       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
 +       dsb
 +       isb
 +       mov     r2, #0
 +       mov     r3, #0
 +       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
 +       isb
 +
 +       pop     {r2, r3}
 +       hvc     #0                      @ Back to SVC
 +       mov     pc, lr
 +ENDPROC(__kvm_tlb_flush_vmid)
 
 
 With the last VMID implementation, you could get the equivalent effect of a
 per-VMID flush by just getting a new VMID for the current VM. So you could
 do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
 save the overhead of that flush (you will do a complete flush every 255
 times instead of a small one every single time).
 
 to do this you would need to send an IPI if the guest is currently
 executing on another CPU and make it exit the guest, so that the VMID
 assignment will run before the guest potentially accesses that TLB
 entry that points to the page that was just reclaimed - which I am not
 sure will be better than this solution.
 
 Don't you have to do this anyway? You'd want the flush to be effective on
 all CPUs before proceeding.
 
 hmm yeah, actually you do need this. Unless the -IS version of the
 flush instruction covers all relevant cores in this case. Marc, I
 don't think that the processor clearing out the page table entry will
 necessarily belong to the same inner-shareable domain as the processor
 potentially executing the VM, so therefore the -IS flushing version
 would not be sufficient and we actually have to go and send an IPI.
 
 So, it sounds to me like:
 1) we have to signal all vcpus using the VMID for which we are
 clearing page table entries
 2) make sure that they, either
2a) flush their TLBs
2b) get a new VMID
 
 seems like 2b might be slightly faster, but leaves more entries in the
 TLB that are then unused - not sure if that's a bad thing considering
 the replacement policy. Perhaps 2a is cleaner...

x86 basically does 2b, but has per-CPU TLB tags.

On PPC, we statically map the guest ID to a guest at the moment.


Alex



Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

2012-02-11 Thread Antonios Motakis

On 02/10/2012 11:22 PM, Marc Zyngier wrote:

+ENTRY(__kvm_tlb_flush_vmid)
+       hvc     #0                      @ Switch to Hyp mode
+       push    {r2, r3}

+       ldrd    r2, r3, [r0, #KVM_VTTBR]
+       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
+       isb
+       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
+       dsb
+       isb
+       mov     r2, #0
+       mov     r3, #0
+       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
+       isb
+
+       pop     {r2, r3}
+       hvc     #0                      @ Back to SVC
+       mov     pc, lr
+ENDPROC(__kvm_tlb_flush_vmid)


With the last VMID implementation, you could get the equivalent effect 
of a per-VMID flush, by just getting a new VMID for the current VM. So 
you could do a (kvm->arch.vmid = 0) to force a new VMID when the guest
reruns, and save the overhead of that flush (you will do a complete 
flush every 255 times instead of a small one every single time).


Best regards,
Antonios


Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

2012-02-11 Thread Christoffer Dall
On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
a.mota...@virtualopensystems.com wrote:
 On 02/10/2012 11:22 PM, Marc Zyngier wrote:

 +ENTRY(__kvm_tlb_flush_vmid)
 +       hvc     #0                      @ Switch to Hyp mode
 +       push    {r2, r3}

 +       ldrd    r2, r3, [r0, #KVM_VTTBR]
 +       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
 +       isb
 +       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
 +       dsb
 +       isb
 +       mov     r2, #0
 +       mov     r3, #0
 +       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
 +       isb
 +
 +       pop     {r2, r3}
 +       hvc     #0                      @ Back to SVC
 +       mov     pc, lr
 +ENDPROC(__kvm_tlb_flush_vmid)


 With the last VMID implementation, you could get the equivalent effect of a
 per-VMID flush, by just getting a new VMID for the current VM. So you could
 do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
 save the overhead of that flush (you will do a complete flush every 255
 times instead of a small one every single time).


to do this you would need to send an IPI if the guest is currently
executing on another CPU and make it exit the guest, so that the VMID
assignment will run before the guest potentially accesses that TLB
entry that points to the page that was just reclaimed - which I am not
sure will be better than this solution.
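
The ordering concern here can be modelled roughly like this (a userspace
sketch with made-up names; kick_vcpu() stands in for the IPI that forces a
guest exit, and is not the real KVM API):

```c
#include <stdbool.h>

/*
 * Before a reclaimed page may be reused, every CPU currently running this
 * VM must be forced out of the guest, so that the new VMID assignment (or
 * a TLB flush) takes effect before the guest can touch the stale mapping.
 */

enum vcpu_mode { OUTSIDE_GUEST, IN_GUEST };

struct vcpu {
    enum vcpu_mode mode;
    bool exit_request;          /* checked on every guest entry */
};

/* Models the IPI: flag the vCPU and force it out of guest mode. */
static void kick_vcpu(struct vcpu *v)
{
    v->exit_request = true;
    if (v->mode == IN_GUEST)
        v->mode = OUTSIDE_GUEST;
}

/*
 * MMU-notifier side: only after every vCPU has been kicked is it safe to
 * hand the page back -- no CPU can still be running on the stale TLB entry.
 */
static void kick_all_vcpus(struct vcpu *vcpus, int n)
{
    for (int i = 0; i < n; i++)
        kick_vcpu(&vcpus[i]);
}
```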


Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

2012-02-11 Thread Antonios Motakis

On 02/11/2012 06:35 PM, Christoffer Dall wrote:

On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
a.mota...@virtualopensystems.com  wrote:

On 02/10/2012 11:22 PM, Marc Zyngier wrote:

+ENTRY(__kvm_tlb_flush_vmid)
+       hvc     #0                      @ Switch to Hyp mode
+       push    {r2, r3}

+       ldrd    r2, r3, [r0, #KVM_VTTBR]
+       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
+       isb
+       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
+       dsb
+       isb
+       mov     r2, #0
+       mov     r3, #0
+       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
+       isb
+
+       pop     {r2, r3}
+       hvc     #0                      @ Back to SVC
+       mov     pc, lr
+ENDPROC(__kvm_tlb_flush_vmid)


With the last VMID implementation, you could get the equivalent effect of a
per-VMID flush, by just getting a new VMID for the current VM. So you could
do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
save the overhead of that flush (you will do a complete flush every 255
times instead of a small one every single time).


to do this you would need to send an IPI if the guest is currently
executing on another CPU and make it exit the guest, so that the VMID
assignment will run before the guest potentially accesses that TLB
entry that points to the page that was just reclaimed - which I am not
sure will be better than this solution.
Don't you have to do this anyway? You'd want the flush to be effective 
on all CPUs before proceeding.



Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

2012-02-11 Thread Christoffer Dall
On Sat, Feb 11, 2012 at 10:33 AM, Antonios Motakis
a.mota...@virtualopensystems.com wrote:
 On 02/11/2012 06:35 PM, Christoffer Dall wrote:

 On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
 a.mota...@virtualopensystems.com  wrote:

 On 02/10/2012 11:22 PM, Marc Zyngier wrote:

 +ENTRY(__kvm_tlb_flush_vmid)
 +       hvc     #0                      @ Switch to Hyp mode
 +       push    {r2, r3}

 +       ldrd    r2, r3, [r0, #KVM_VTTBR]
 +       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
 +       isb
 +       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
 +       dsb
 +       isb
 +       mov     r2, #0
 +       mov     r3, #0
 +       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
 +       isb
 +
 +       pop     {r2, r3}
 +       hvc     #0                      @ Back to SVC
 +       mov     pc, lr
 +ENDPROC(__kvm_tlb_flush_vmid)


 With the last VMID implementation, you could get the equivalent effect of a
 per-VMID flush by just getting a new VMID for the current VM. So you could
 do a (kvm->arch.vmid = 0) to force a new VMID when the guest reruns, and
 save the overhead of that flush (you will do a complete flush every 255
 times instead of a small one every single time).

 to do this you would need to send an IPI if the guest is currently
 executing on another CPU and make it exit the guest, so that the VMID
 assignment will run before the guest potentially accesses that TLB
 entry that points to the page that was just reclaimed - which I am not
 sure will be better than this solution.

 Don't you have to do this anyway? You'd want the flush to be effective on
 all CPUs before proceeding.

hmm yeah, actually you do need this. Unless the -IS version of the
flush instruction covers all relevant cores in this case. Marc, I
don't think that the processor clearing out the page table entry will
necessarily belong to the same inner-shareable domain as the processor
potentially executing the VM, so therefore the -IS flushing version
would not be sufficient and we actually have to go and send an IPI.

So, it sounds to me like:
 1) we have to signal all vcpus using the VMID for which we are
clearing page table entries
 2) make sure that they, either
2a) flush their TLBs
2b) get a new VMID

seems like 2b might be slightly faster, but leaves more entries in the
TLB that are then unused - not sure if that's a bad thing considering
the replacement policy. Perhaps 2a is cleaner...

Thoughts anyone?