Re: [PATCH] ARM/ARM64: KVM: remove 'config KVM_ARM_MAX_VCPUS'

2015-09-17 Thread Ming Lei
On Wed, Sep 2, 2015 at 7:42 PM, Ming Lei <ming@canonical.com> wrote:
> On Wed, Sep 2, 2015 at 6:25 PM, Christoffer Dall
> <christoffer.d...@linaro.org> wrote:
>> On Wed, Sep 02, 2015 at 02:31:21PM +0800, Ming Lei wrote:
>>> This patch removes the config option KVM_ARM_MAX_VCPUS and,
>>> like other architectures, simply chooses the maximum value
>>> allowed by the hardware, for the following reasons:
>>>
>>> 1) from a distribution's point of view, the option has to be
>>> set to the maximum allowed value anyway, because it needs to
>>> cover all kinds of virtualization applications and
>>> support most SoCs;
>>>
>>> 2) using a bigger value doesn't introduce extra memory
>>> consumption, and the help text in Kconfig isn't accurate
>>> because the kvm_vcpu structure isn't allocated until a request
>>> to create a VCPU is sent from QEMU;
>>
>> This used to be true because of the vgic bitmaps, but those are now
>> dynamically allocated, so I believe you're correct in saying that the
>> text is no longer accurate.
>>
>>>
>>> 3) the main effect is that the vcpus[] field in 'struct kvm'
>>> becomes a bit bigger (sizeof(void *) per vcpu) and needs more cache
>>> lines to hold the structure, but 'struct kvm' is a generic struct
>>> and has already worked well this way on other architectures. Also,
>>> the world-switch frequency is often low; for example, it is ~2000/sec
>>> when running a kernel-build workload in a VM on an APM X-Gene KVM host,
>>> so the effect is very small, and no difference could be observed
>>> in my tests at all.
>>
>> While I'm not principally opposed to removing this option, I have to
>> point out that this analysis is far, far over-simplified.  You have
>> chosen a workload which exercised only CPU and memory virtualization,
>> mostly solved by the hardware virtualization support, and therefore you
>> don't see many exits.
>>
>> Try running an I/O bound workload, or something which involves a lot of
>> virtual IPIs, and you'll see a higher number of exits.
>
> Yeah, the frequency of exits becomes higher (6600/sec) when I run a
> purely I/O-bound benchmark (fio: 4 jobs, bs 4k, libaio over virtio-blk)
> in a quad-core VM, but it is still not high enough to cause any
> difference in the test result.
>
>>
>> However, I still doubt that the effects will be noticeable in the grand
>> scheme of things.
>>>
>>> Cc: Dann Frazier <dann.fraz...@canonical.com>
>>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>>> Cc: kvm...@lists.cs.columbia.edu
>>> Cc: kvm@vger.kernel.org
>>> Signed-off-by: Ming Lei <ming@canonical.com>
>>> ---
>>>  arch/arm/include/asm/kvm_host.h   |  8 ++--
>>>  arch/arm/kvm/Kconfig  | 11 ---
>>>  arch/arm64/include/asm/kvm_host.h |  8 ++--
>>>  arch/arm64/kvm/Kconfig| 11 ---
>>>  include/kvm/arm_vgic.h|  6 +-
>>>  virt/kvm/arm/vgic-v3.c|  2 +-
>>>  6 files changed, 6 insertions(+), 40 deletions(-)
>>>
>>> diff --git a/arch/arm/include/asm/kvm_host.h 
>>> b/arch/arm/include/asm/kvm_host.h
>>> index dcba0fa..c8c226a 100644
>>> --- a/arch/arm/include/asm/kvm_host.h
>>> +++ b/arch/arm/include/asm/kvm_host.h
>>> @@ -29,12 +29,6 @@
>>>
>>>  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
>>>
>>> -#if defined(CONFIG_KVM_ARM_MAX_VCPUS)
>>> -#define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
>>> -#else
>>> -#define KVM_MAX_VCPUS 0
>>> -#endif
>>> -
>>>  #define KVM_USER_MEM_SLOTS 32
>>>  #define KVM_PRIVATE_MEM_SLOTS 4
>>>  #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
>>> @@ -44,6 +38,8 @@
>>>
>>>  #include <kvm/arm_vgic.h>
>>>
>>> +#define KVM_MAX_VCPUS VGIC_V2_MAX_CPUS
>>> +
>>>  u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
>>>  int __attribute_const__ kvm_target_cpu(void);
>>>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
>>> diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
>>> index bfb915d..210ecca 100644
>>> --- a/arch/arm/kvm/Kconfig
>>> +++ b/arch/arm/kvm/Kconfig
>>> @@ -45,15 +45,4 @@ config KVM_ARM_HOST
>>>   ---help---
>>> Provides host support for ARM processors.
>>>
>>> -config KVM_ARM_MAX_VCPUS
>>> - int "Number maximum supported virtual CPU

[PATCH] ARM/ARM64: KVM: remove 'config KVM_ARM_MAX_VCPUS'

2015-09-02 Thread Ming Lei
This patch removes the config option KVM_ARM_MAX_VCPUS and,
like other architectures, simply chooses the maximum value
allowed by the hardware, for the following reasons:

1) from a distribution's point of view, the option has to be
set to the maximum allowed value anyway, because it needs to
cover all kinds of virtualization applications and
support most SoCs;

2) using a bigger value doesn't introduce extra memory
consumption, and the help text in Kconfig isn't accurate
because the kvm_vcpu structure isn't allocated until a request
to create a VCPU is sent from QEMU;

3) the main effect is that the vcpus[] field in 'struct kvm'
becomes a bit bigger (sizeof(void *) per vcpu) and needs more cache
lines to hold the structure, but 'struct kvm' is a generic struct
and has already worked well this way on other architectures. Also,
the world-switch frequency is often low; for example, it is ~2000/sec
when running a kernel-build workload in a VM on an APM X-Gene KVM host,
so the effect is very small, and no difference could be observed
in my tests at all.

Cc: Dann Frazier <dann.fraz...@canonical.com>
Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Cc: kvm...@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org
Signed-off-by: Ming Lei <ming@canonical.com>
---
 arch/arm/include/asm/kvm_host.h   |  8 ++--
 arch/arm/kvm/Kconfig  | 11 ---
 arch/arm64/include/asm/kvm_host.h |  8 ++--
 arch/arm64/kvm/Kconfig| 11 ---
 include/kvm/arm_vgic.h|  6 +-
 virt/kvm/arm/vgic-v3.c|  2 +-
 6 files changed, 6 insertions(+), 40 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index dcba0fa..c8c226a 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -29,12 +29,6 @@
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
-#if defined(CONFIG_KVM_ARM_MAX_VCPUS)
-#define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
-#else
-#define KVM_MAX_VCPUS 0
-#endif
-
 #define KVM_USER_MEM_SLOTS 32
 #define KVM_PRIVATE_MEM_SLOTS 4
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
@@ -44,6 +38,8 @@
 
 #include <kvm/arm_vgic.h>
 
+#define KVM_MAX_VCPUS VGIC_V2_MAX_CPUS
+
 u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index bfb915d..210ecca 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -45,15 +45,4 @@ config KVM_ARM_HOST
---help---
  Provides host support for ARM processors.
 
-config KVM_ARM_MAX_VCPUS
-   int "Number maximum supported virtual CPUs per VM"
-   depends on KVM_ARM_HOST
-   default 4
-   help
- Static number of max supported virtual CPUs per VM.
-
- If you choose a high number, the vcpu structures will be quite
- large, so only choose a reasonable number that you expect to
- actually use.
-
 endif # VIRTUALIZATION
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 415938d..3fb58ea 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -30,12 +30,6 @@
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
-#if defined(CONFIG_KVM_ARM_MAX_VCPUS)
-#define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
-#else
-#define KVM_MAX_VCPUS 0
-#endif
-
 #define KVM_USER_MEM_SLOTS 32
 #define KVM_PRIVATE_MEM_SLOTS 4
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
@@ -43,6 +37,8 @@
 #include <kvm/arm_vgic.h>
 #include <kvm/arm_arch_timer.h>
 
+#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
+
 #define KVM_VCPU_MAX_FEATURES 3
 
 int __attribute_const__ kvm_target_cpu(void);
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index bfffe8f..5c7e920 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -41,15 +41,4 @@ config KVM_ARM_HOST
---help---
  Provides host support for ARM processors.
 
-config KVM_ARM_MAX_VCPUS
-   int "Number maximum supported virtual CPUs per VM"
-   depends on KVM_ARM_HOST
-   default 4
-   help
- Static number of max supported virtual CPUs per VM.
-
- If you choose a high number, the vcpu structures will be quite
- large, so only choose a reasonable number that you expect to
- actually use.
-
 endif # VIRTUALIZATION
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index d901f1a..4e14dac 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -35,11 +35,7 @@
 #define VGIC_V3_MAX_LRS   16
 #define VGIC_MAX_IRQS  1024
 #define VGIC_V2_MAX_CPUS   8
-
-/* Sanity checks... */
-#if (KVM_MAX_VCPUS > 255)
-#error Too many KVM VCPUs, the VGIC only supports up to 255 VCPUs for now
-#endif
+#define VGIC_V3_MAX_CPUS   255
 
 #if (VGIC_NR_IRQS_LEGACY & 31)
 #error "VGIC_NR_IRQS must be a multiple of 32"
diff --git a/virt/kvm/arm/vgic-v3.c b/virt/kvm/arm/vgic-v3.c
index afbf925..7dd5d62 100

Re: [PATCH] ARM/ARM64: KVM: remove 'config KVM_ARM_MAX_VCPUS'

2015-09-02 Thread Ming Lei
On Wed, Sep 2, 2015 at 6:25 PM, Christoffer Dall
<christoffer.d...@linaro.org> wrote:
> On Wed, Sep 02, 2015 at 02:31:21PM +0800, Ming Lei wrote:
>> This patch removes the config option KVM_ARM_MAX_VCPUS and,
>> like other architectures, simply chooses the maximum value
>> allowed by the hardware, for the following reasons:
>>
>> 1) from a distribution's point of view, the option has to be
>> set to the maximum allowed value anyway, because it needs to
>> cover all kinds of virtualization applications and
>> support most SoCs;
>>
>> 2) using a bigger value doesn't introduce extra memory
>> consumption, and the help text in Kconfig isn't accurate
>> because the kvm_vcpu structure isn't allocated until a request
>> to create a VCPU is sent from QEMU;
>
> This used to be true because of the vgic bitmaps, but those are now
> dynamically allocated, so I believe you're correct in saying that the
> text is no longer accurate.
>
>>
>> 3) the main effect is that the vcpus[] field in 'struct kvm'
>> becomes a bit bigger (sizeof(void *) per vcpu) and needs more cache
>> lines to hold the structure, but 'struct kvm' is a generic struct
>> and has already worked well this way on other architectures. Also,
>> the world-switch frequency is often low; for example, it is ~2000/sec
>> when running a kernel-build workload in a VM on an APM X-Gene KVM host,
>> so the effect is very small, and no difference could be observed
>> in my tests at all.
>
> While I'm not principally opposed to removing this option, I have to
> point out that this analysis is far, far over-simplified.  You have
> chosen a workload which exercised only CPU and memory virtualization,
> mostly solved by the hardware virtualization support, and therefore you
> don't see many exits.
>
> Try running an I/O bound workload, or something which involves a lot of
> virtual IPIs, and you'll see a higher number of exits.

Yeah, the frequency of exits becomes higher (6600/sec) when I run a
purely I/O-bound benchmark (fio: 4 jobs, bs 4k, libaio over virtio-blk)
in a quad-core VM, but it is still not high enough to cause any
difference in the test result.

>
> However, I still doubt that the effects will be noticeable in the grand
> scheme of things.
>>
>> Cc: Dann Frazier <dann.fraz...@canonical.com>
>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Cc: kvm...@lists.cs.columbia.edu
>> Cc: kvm@vger.kernel.org
>> Signed-off-by: Ming Lei <ming@canonical.com>
>> ---
>>  arch/arm/include/asm/kvm_host.h   |  8 ++--
>>  arch/arm/kvm/Kconfig  | 11 ---
>>  arch/arm64/include/asm/kvm_host.h |  8 ++--
>>  arch/arm64/kvm/Kconfig| 11 ---
>>  include/kvm/arm_vgic.h|  6 +-
>>  virt/kvm/arm/vgic-v3.c|  2 +-
>>  6 files changed, 6 insertions(+), 40 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_host.h 
>> b/arch/arm/include/asm/kvm_host.h
>> index dcba0fa..c8c226a 100644
>> --- a/arch/arm/include/asm/kvm_host.h
>> +++ b/arch/arm/include/asm/kvm_host.h
>> @@ -29,12 +29,6 @@
>>
>>  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
>>
>> -#if defined(CONFIG_KVM_ARM_MAX_VCPUS)
>> -#define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
>> -#else
>> -#define KVM_MAX_VCPUS 0
>> -#endif
>> -
>>  #define KVM_USER_MEM_SLOTS 32
>>  #define KVM_PRIVATE_MEM_SLOTS 4
>>  #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
>> @@ -44,6 +38,8 @@
>>
>>  #include <kvm/arm_vgic.h>
>>
>> +#define KVM_MAX_VCPUS VGIC_V2_MAX_CPUS
>> +
>>  u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
>>  int __attribute_const__ kvm_target_cpu(void);
>>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
>> diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
>> index bfb915d..210ecca 100644
>> --- a/arch/arm/kvm/Kconfig
>> +++ b/arch/arm/kvm/Kconfig
>> @@ -45,15 +45,4 @@ config KVM_ARM_HOST
>>   ---help---
>> Provides host support for ARM processors.
>>
>> -config KVM_ARM_MAX_VCPUS
>> - int "Number maximum supported virtual CPUs per VM"
>> - depends on KVM_ARM_HOST
>> - default 4
>> - help
>> -   Static number of max supported virtual CPUs per VM.
>> -
>> -   If you choose a high number, the vcpu structures will be quite
>> -   large, so only choose a reasonable number that you expect to
>> -   actually use.
>> -
>>  endif # VIRTUALIZATION
>> diff --git a/arch/arm64/include/asm/kvm_host.h 
>> b/arch/arm64/include

Re: [PATCH v1] ARM/ARM64: support KVM_IOEVENTFD

2014-11-22 Thread Ming Lei
Hi Eric,

Thanks for your FYI.

On Fri, Nov 21, 2014 at 8:58 PM, Eric Auger <eric.au...@linaro.org> wrote:
 Hi Ming,

 for your information there is a series written by Antonios (added in CC)
 https://lists.cs.columbia.edu/pipermail/kvmarm/2014-March/008416.html
 exactly on the same topic.

 The thread was reactivated by Nikolay latterly on Nov (see
 http://www.gossamer-threads.com/lists/linux/kernel/1886716?page=last).


Looks like Google didn't tell me about the above, :-(

 I am also convinced we must progress on ioeventfd topic concurrently

Yes, I hope both can be merged soon.

 with irqfd one. What starting point do we use then for further comments?

Please ignore this one and follow the previous thread.

Thanks,
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] ARM/ARM64: support KVM_IOEVENTFD

2014-11-18 Thread Ming Lei
According to Documentation/virtual/kvm/api.txt, all architectures
should support ioeventfd.

Also, ARM VMs already support the PCI bus, and ARM64 will too;
ioeventfd is required by some popular devices, such as virtio-blk
and virtio-scsi dataplane in QEMU.

Without this patch, virtio-blk-pci dataplane can't work in QEMU.

Signed-off-by: Ming Lei <ming@canonical.com>
---
 arch/arm/kvm/Kconfig   |1 +
 arch/arm/kvm/Makefile  |2 +-
 arch/arm/kvm/arm.c |1 +
 arch/arm/kvm/mmio.c|   19 +++
 arch/arm64/kvm/Kconfig |1 +
 5 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index 466bd29..25bd83a 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -23,6 +23,7 @@ config KVM
select HAVE_KVM_CPU_RELAX_INTERCEPT
select KVM_MMIO
select KVM_ARM_HOST
+   select HAVE_KVM_EVENTFD
depends on ARM_VIRT_EXT && ARM_LPAE
---help---
  Support hosting virtualized guest machines. You will also
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index f7057ed..859db09 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -15,7 +15,7 @@ AFLAGS_init.o := -Wa,-march=armv7-a$(plus_virt)
 AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
 
 KVM := ../../../virt/kvm
-kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o
 
 obj-y += kvm-arm.o init.o interrupts.o
 obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9e193c8..d90d989 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -172,6 +172,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_IRQCHIP:
r = vgic_present;
break;
+   case KVM_CAP_IOEVENTFD:
case KVM_CAP_DEVICE_CTRL:
case KVM_CAP_USER_MEMORY:
case KVM_CAP_SYNC_MMU:
diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 4cb5a93..ee332a7 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -162,6 +162,21 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t 
fault_ipa,
return 0;
 }
 
+static int handle_io_bus_rw(struct kvm_vcpu *vcpu, gpa_t addr,
+   int len, void *val, bool write)
+{
+   int idx, ret;
+
+   idx = srcu_read_lock(&vcpu->kvm->srcu);
+   if (write)
+   ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, addr, len, val);
+   else
+   ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, addr, len, val);
+   srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+   return ret;
+}
+
 int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 phys_addr_t fault_ipa)
 {
@@ -200,6 +215,10 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run 
*run,
if (vgic_handle_mmio(vcpu, run, &mmio))
return 1;
 
+   if (!handle_io_bus_rw(vcpu, fault_ipa, mmio.len, mmio.data,
+   mmio.is_write))
+   return 1;
+
kvm_prepare_mmio(run, &mmio);
return 0;
 }
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8ba85e9..642f57c 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -26,6 +26,7 @@ config KVM
select KVM_ARM_HOST
select KVM_ARM_VGIC
select KVM_ARM_TIMER
+   select HAVE_KVM_EVENTFD
---help---
  Support hosting virtualized guest machines.
 
-- 
1.7.9.5



Re: [PATCH] ARM/ARM64: support KVM_IOEVENTFD

2014-11-18 Thread Ming Lei
On Tue, Nov 18, 2014 at 11:24 PM, Ming Lei <ming@canonical.com> wrote:
 According to Documentation/virtual/kvm/api.txt, all architectures
 should support ioeventfd.

 Also, ARM VMs already support the PCI bus, and ARM64 will too;
 ioeventfd is required by some popular devices, such as virtio-blk
 and virtio-scsi dataplane in QEMU.

 Without this patch, virtio-blk-pci dataplane can't work in QEMU.

Please ignore this one because eventfd.o is missing from the arm64 build.


 Signed-off-by: Ming Lei <ming@canonical.com>
 ---
  arch/arm/kvm/Kconfig   |1 +
  arch/arm/kvm/Makefile  |2 +-
  arch/arm/kvm/arm.c |1 +
  arch/arm/kvm/mmio.c|   19 +++
  arch/arm64/kvm/Kconfig |1 +
  5 files changed, 23 insertions(+), 1 deletion(-)

 diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
 index 466bd29..25bd83a 100644
 --- a/arch/arm/kvm/Kconfig
 +++ b/arch/arm/kvm/Kconfig
 @@ -23,6 +23,7 @@ config KVM
 select HAVE_KVM_CPU_RELAX_INTERCEPT
 select KVM_MMIO
 select KVM_ARM_HOST
 +   select HAVE_KVM_EVENTFD
 depends on ARM_VIRT_EXT && ARM_LPAE
 ---help---
   Support hosting virtualized guest machines. You will also
 diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
 index f7057ed..859db09 100644
 --- a/arch/arm/kvm/Makefile
 +++ b/arch/arm/kvm/Makefile
 @@ -15,7 +15,7 @@ AFLAGS_init.o := -Wa,-march=armv7-a$(plus_virt)
  AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)

  KVM := ../../../virt/kvm
 -kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
 +kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o

  obj-y += kvm-arm.o init.o interrupts.o
  obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
 diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
 index 9e193c8..d90d989 100644
 --- a/arch/arm/kvm/arm.c
 +++ b/arch/arm/kvm/arm.c
 @@ -172,6 +172,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long 
 ext)
 case KVM_CAP_IRQCHIP:
 r = vgic_present;
 break;
 +   case KVM_CAP_IOEVENTFD:
 case KVM_CAP_DEVICE_CTRL:
 case KVM_CAP_USER_MEMORY:
 case KVM_CAP_SYNC_MMU:
 diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
 index 4cb5a93..ee332a7 100644
 --- a/arch/arm/kvm/mmio.c
 +++ b/arch/arm/kvm/mmio.c
 @@ -162,6 +162,21 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t 
 fault_ipa,
 return 0;
  }

 +static int handle_io_bus_rw(struct kvm_vcpu *vcpu, gpa_t addr,
 +   int len, void *val, bool write)
 +{
 +   int idx, ret;
 +
 +   idx = srcu_read_lock(&vcpu->kvm->srcu);
 +   if (write)
 +   ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, addr, len, 
 val);
 +   else
 +   ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, addr, len, 
 val);
 +   srcu_read_unlock(&vcpu->kvm->srcu, idx);
 +
 +   return ret;
 +}
 +
  int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
  phys_addr_t fault_ipa)
  {
 @@ -200,6 +215,10 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run 
 *run,
 if (vgic_handle_mmio(vcpu, run, &mmio))
 return 1;

 +   if (!handle_io_bus_rw(vcpu, fault_ipa, mmio.len, mmio.data,
 +   mmio.is_write))
 +   return 1;
 +
 kvm_prepare_mmio(run, &mmio);
 return 0;
  }
 diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
 index 8ba85e9..642f57c 100644
 --- a/arch/arm64/kvm/Kconfig
 +++ b/arch/arm64/kvm/Kconfig
 @@ -26,6 +26,7 @@ config KVM
 select KVM_ARM_HOST
 select KVM_ARM_VGIC
 select KVM_ARM_TIMER
 +   select HAVE_KVM_EVENTFD
 ---help---
   Support hosting virtualized guest machines.

 --
 1.7.9.5



[PATCH v1] ARM/ARM64: support KVM_IOEVENTFD

2014-11-18 Thread Ming Lei
According to Documentation/virtual/kvm/api.txt, all architectures
should support ioeventfd.

Also, ARM VMs already support the PCI bus, and ARM64 will too;
ioeventfd is required by some popular devices, such as virtio-blk
and virtio-scsi dataplane in QEMU.

Without this patch, virtio-blk-pci dataplane can't work in QEMU.

This patch has been tested on both ARM and ARM64.

Signed-off-by: Ming Lei <ming@canonical.com>
---
v1:
- build eventfd.o on ARM64 too
 arch/arm/kvm/Kconfig|1 +
 arch/arm/kvm/Makefile   |2 +-
 arch/arm/kvm/arm.c  |1 +
 arch/arm/kvm/mmio.c |   19 +++
 arch/arm64/kvm/Kconfig  |1 +
 arch/arm64/kvm/Makefile |2 +-
 6 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index 466bd29..25bd83a 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -23,6 +23,7 @@ config KVM
select HAVE_KVM_CPU_RELAX_INTERCEPT
select KVM_MMIO
select KVM_ARM_HOST
+   select HAVE_KVM_EVENTFD
depends on ARM_VIRT_EXT && ARM_LPAE
---help---
  Support hosting virtualized guest machines. You will also
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index f7057ed..859db09 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -15,7 +15,7 @@ AFLAGS_init.o := -Wa,-march=armv7-a$(plus_virt)
 AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
 
 KVM := ../../../virt/kvm
-kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o
 
 obj-y += kvm-arm.o init.o interrupts.o
 obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9e193c8..d90d989 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -172,6 +172,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_IRQCHIP:
r = vgic_present;
break;
+   case KVM_CAP_IOEVENTFD:
case KVM_CAP_DEVICE_CTRL:
case KVM_CAP_USER_MEMORY:
case KVM_CAP_SYNC_MMU:
diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 4cb5a93..ee332a7 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -162,6 +162,21 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t 
fault_ipa,
return 0;
 }
 
+static int handle_io_bus_rw(struct kvm_vcpu *vcpu, gpa_t addr,
+   int len, void *val, bool write)
+{
+   int idx, ret;
+
+   idx = srcu_read_lock(&vcpu->kvm->srcu);
+   if (write)
+   ret = kvm_io_bus_write(vcpu->kvm, KVM_MMIO_BUS, addr, len, val);
+   else
+   ret = kvm_io_bus_read(vcpu->kvm, KVM_MMIO_BUS, addr, len, val);
+   srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+   return ret;
+}
+
 int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 phys_addr_t fault_ipa)
 {
@@ -200,6 +215,10 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run 
*run,
if (vgic_handle_mmio(vcpu, run, &mmio))
return 1;
 
+   if (!handle_io_bus_rw(vcpu, fault_ipa, mmio.len, mmio.data,
+   mmio.is_write))
+   return 1;
+
kvm_prepare_mmio(run, &mmio);
return 0;
 }
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8ba85e9..642f57c 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -26,6 +26,7 @@ config KVM
select KVM_ARM_HOST
select KVM_ARM_VGIC
select KVM_ARM_TIMER
+   select HAVE_KVM_EVENTFD
---help---
  Support hosting virtualized guest machines.
 
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 32a0961..2e6b827 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -11,7 +11,7 @@ ARM=../../../arch/arm/kvm
 
 obj-$(CONFIG_KVM_ARM_HOST) += kvm.o
 
-kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o 
$(KVM)/eventfd.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(ARM)/arm.o $(ARM)/mmu.o $(ARM)/mmio.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(ARM)/psci.o $(ARM)/perf.o
 
-- 
1.7.9.5



Re: blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)

2014-09-17 Thread Ming Lei
On Wed, Sep 17, 2014 at 3:59 PM, Christian Borntraeger
<borntrae...@de.ibm.com> wrote:
 On 09/12/2014 10:09 PM, Christian Borntraeger wrote:
 On 09/12/2014 01:54 PM, Ming Lei wrote:
 On Thu, Sep 11, 2014 at 6:26 PM, Christian Borntraeger
 borntrae...@de.ibm.com wrote:
 Folks,

 we have seen the following bug with 3.16 as a KVM guest. I suspect the 
 blk-mq rework that happened between 3.15 and 3.16, but it can be something 
 completely different.


 Care to share how you reproduce the issue?

 Host with 16GB RAM, 32GB swap. 15 guests, all with 2 GB RAM (and varying 
 numbers of CPUs). All do heavy file I/O.
 It did not happen with 3.15/3.15 in guest/host and does happen with 
 3.16/3.16. So our next step is to check
 3.15/3.16 and 3.16/3.15 to identify whether it's host memory mgmt or the 
 guest block layer.

 The crashes happen pretty randomly, but when they happen it seems to be 
 the same trace as below. This makes memory corruption by the host VM less 
 likely and something wrong in blk-mq more likely, I guess.


Maybe you can try these patches, because atomic ops
can be reordered on s390:

http://marc.info/?l=linux-kernel&m=141094730828533&w=2
http://marc.info/?l=linux-kernel&m=141094730828534&w=2

Thanks
-- 
Ming Lei


Re: blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)

2014-09-17 Thread Ming Lei
On Wed, 17 Sep 2014 14:00:34 +0200
David Hildenbrand <d...@linux.vnet.ibm.com> wrote:

   Does anyone have an idea?
   The request itself is completely filled with cc
  
   That is very weird, the 'rq' is got from hctx-tags,  and rq should be
   valid, and rq-q shouldn't have been changed even though it was
   double free or double allocation.
  
   I am currently asking myself if blk_mq_map_request should protect 
   against softirq here but I cant say for sure,as I have never looked 
   into that code before.
  
   No, it needn't the protection.
  
   Thanks,
  
   
   --
   To unsubscribe from this list: send the line unsubscribe kvm in
   the body of a message to majord...@vger.kernel.org
   More majordomo info at  http://vger.kernel.org/majordomo-info.html
   
  
 
 Digging through the code, I think I found a possible cause:
 
 tags->rqs[..] is not initialized with zeroes (via alloc_pages_node in
 blk-mq.c:blk_mq_init_rq_map()).

Yes, it may cause a problem when a request is allocated for the first time:
the timeout handler may come in just after the allocation and before its
initialization, and then an oops is triggered by the garbage data in the request. 

--
From ffd0824b7b686074c2d5d70bc4e6bba3ba56a30c Mon Sep 17 00:00:00 2001
From: Ming Lei <ming@canonical.com>
Date: Wed, 17 Sep 2014 21:00:34 +0800
Subject: [PATCH] blk-mq: initialize request before the 1st allocation

Otherwise the request can be accessed from the timeout handler
just after its first allocation from the tag pool and before its
initialization in blk_mq_rq_ctx_init(), causing an oops since
the request is filled with garbage data.

Signed-off-by: Ming Lei <ming@canonical.com>
---
 block/blk-mq.c |   10 ++
 1 file changed, 10 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4aac826..d24673f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -514,6 +514,10 @@ struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, 
unsigned int tag)
 {
struct request *rq = tags->rqs[tag];
 
+   /* uninitialized request */
+   if (!rq->q || rq->tag == -1)
+   return rq;
+
if (!is_flush_request(rq, tag))
return rq;
 
@@ -1401,6 +1405,12 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct 
blk_mq_tag_set *set,
left -= to_do * rq_size;
for (j = 0; j < to_do; j++) {
tags->rqs[i] = p;
+
+   /* Avoiding early access from timeout handler */
+   tags->rqs[i]->tag = -1;
+   tags->rqs[i]->q = NULL;
+   tags->rqs[i]->cmd_flags = 0;
+
if (set->ops->init_request) {
if (set->ops->init_request(set->driver_data,
tags->rqs[i], hctx_idx, i,
-- 
1.7.9.5





-- 
Ming Lei


Re: blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)

2014-09-17 Thread Ming Lei
On Wed, Sep 17, 2014 at 10:22 PM, Jens Axboe <ax...@kernel.dk> wrote:

 Another way would be to ensure that the timeout handler doesn't touch hw_ctx
 or tag_sets that aren't fully initialized yet. But I think this is
 safer/cleaner.

That may not be easy, or even enough, to check whether hw_ctx/tag_sets
are fully initialized, if you mean that all requests have been used at
least once.

On Wed, Sep 17, 2014 at 10:11 PM, David Hildenbrand
 I was playing with a simple patch that just sets cmd_flags and action_flags to

What is action_flags?

 0. That should already be sufficient to hinder blk_mq_tag_to_rq and the 
 calling
 method to do the wrong thing.

Yes, clearing rq->cmd_flags should be enough.

And it looks better to move the rq initialization to __blk_mq_free_request()
too; otherwise the timeout handler may still see the old cmd_flags and rq->q
before the rq's new initialization.


Thanks,
-- 
Ming Lei


Re: blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)

2014-09-17 Thread Ming Lei
On Thu, Sep 18, 2014 at 3:09 AM, David Hildenbrand
<d...@linux.vnet.ibm.com> wrote:
 On Wed, Sep 17, 2014 at 10:22 PM, Jens Axboe ax...@kernel.dk wrote:
 
  Another way would be to ensure that the timeout handler doesn't touch 
  hw_ctx
  or tag_sets that aren't fully initialized yet. But I think this is
  safer/cleaner.

 That may not be easy, or even enough, to check whether hw_ctx/tag_sets
 are fully initialized, if you mean that all requests have been used at
 least once.

 On Wed, Sep 17, 2014 at 10:11 PM, David Hildenbrand
  I was playing with a simple patch that just sets cmd_flags and 
  action_flags to

 What is action_flags?

 atomic_flags, sorry :)

 Otherwise e.g. REQ_ATOM_STARTED could already be set due to the randomness. I
 am not sure if this is really necessary, or if it is completely shielded by 
 the
 tag-handling code, but seemed to be clean for me to do it (and I remember it
 not being set within blk_mq_rq_ctx_init).

You are right, both cmd_flags and atomic_flags should be cleared
in blk_mq_init_rq_map().



  0. That should already be sufficient to hinder blk_mq_tag_to_rq and the 
  calling
  method to do the wrong thing.

 Yes, clearing rq->cmd_flags should be enough.

 And it looks better to move the rq initialization to __blk_mq_free_request()
 too; otherwise the timeout handler may still see the old cmd_flags and rq->q
 before the request is re-initialized.

 Yes, __blk_mq_free_request() should also reset at least rq->cmd_flags, and I
 think we can remove the initialization from __blk_mq_alloc_request().

 David



 Thanks,




-- 
Ming Lei


Re: blk-mq crash under KVM in multiqueue block code (with virtio-blk and ext4)

2014-09-12 Thread Ming Lei
On Thu, Sep 11, 2014 at 6:26 PM, Christian Borntraeger
borntrae...@de.ibm.com wrote:
 Folks,

 we have seen the following bug with 3.16 as a KVM guest. I suspect the
 blk-mq rework that happened between 3.15 and 3.16, but it could be something
 completely different.


Care to share how you reproduce the issue?

 [   65.992022] Unable to handle kernel pointer dereference in virtual kernel 
 address space
 [   65.992187] failing address: d000 TEID: d803
 [   65.992363] Fault in home space mode while using kernel ASCE.
 [   65.992365] AS:00a7c007 R3:0024
 [   65.993754] Oops: 0038 [#1] SMP
 [   65.993923] Modules linked in: iscsi_tcp libiscsi_tcp libiscsi 
 scsi_transport_iscsi virtio_balloon vhost_net vhost macvtap macvlan kvm 
 dm_multipath virtio_net virtio_blk sunrpc
 [   65.994274] CPU: 0 PID: 44 Comm: kworker/u6:2 Not tainted 
 3.16.0-20140814.0.c66c84c.fc18-s390xfrob #1
 [   65.996043] Workqueue: writeback bdi_writeback_workfn (flush-251:32)
 [   65.996222] task: 0225 ti: 02258000 task.ti: 
 02258000
 [   65.996228] Krnl PSW : 0704f0018000 003ed114 
 (blk_mq_tag_to_rq+0x20/0x38)
 [   65.997299]R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:3 PM:0 
 EA:3
Krnl GPRS: 0040  01619000 
 004e
 [   65.997301]004e  0001 
 00a0de18
 [   65.997302]77ffbe18 77ffbd50 6d72d620 
 004f
 [   65.997304]01a99400 0080 003eddee 
 77ffbc28
 [   65.997864] Krnl Code: 003ed106: e3102034lg  
 %r1,48(%r2)
   003ed10c: 91082044tm  
 68(%r2),8
  #003ed110: a7840009brc 
 8,3ed122
  003ed114: e34016880004lg  
 %r4,1672(%r1)
   003ed11a: 59304100c   
 %r3,256(%r4)
   003ed11e: a7840003brc 
 8,3ed124
   003ed122: 07febcr 
 15,%r14
   003ed124: b9040024lgr 
 %r2,%r4
 [   65.998221] Call Trace:
 [   65.998224] ([0001] 0x1)
 [   65.998227]  [003f17b6] blk_mq_tag_busy_iter+0x7a/0xc4
 [   65.998228]  [003edcd6] blk_mq_rq_timer+0x96/0x13c
 [   65.999226]  [0013ee60] call_timer_fn+0x40/0x110
 [   65.999230]  [0013f642] run_timer_softirq+0x2de/0x3d0
 [   65.999238]  [00135b70] __do_softirq+0x124/0x2ac
 [   65.999241]  [00136000] irq_exit+0xc4/0xe4
 [   65.999435]  [0010bc08] do_IRQ+0x64/0x84
 [   66.437533]  [0067ccd8] ext_skip+0x42/0x46
 [   66.437541]  [003ed7b4] __blk_mq_alloc_request+0x58/0x1e8
 [   66.437544] ([003ed788] __blk_mq_alloc_request+0x2c/0x1e8)
 [   66.437547]  [003eef82] blk_mq_map_request+0xc2/0x208
 [   66.437549]  [003ef860] blk_sq_make_request+0xac/0x350
 [   66.437721]  [003e2d6c] generic_make_request+0xc4/0xfc
 [   66.437723]  [003e2e56] submit_bio+0xb2/0x1a8
 [   66.438373]  [0031e8aa] ext4_io_submit+0x52/0x80
 [   66.438375]  [0031ccfa] ext4_writepages+0x7c6/0xd0c
 [   66.438378]  [002aea20] __writeback_single_inode+0x54/0x274
 [   66.438379]  [002b0134] writeback_sb_inodes+0x28c/0x4ec
 [   66.438380]  [002b042e] __writeback_inodes_wb+0x9a/0xe4
 [   66.438382]  [002b06a2] wb_writeback+0x22a/0x358
 [   66.438383]  [002b0cd0] bdi_writeback_workfn+0x354/0x538
 [   66.438618]  [0014e3aa] process_one_work+0x1aa/0x418
 [   66.438621]  [0014ef94] worker_thread+0x48/0x524
 [   66.438625]  [001560ca] kthread+0xee/0x108
 [   66.438627]  [0067c76e] kernel_thread_starter+0x6/0xc
 [   66.438628]  [0067c768] kernel_thread_starter+0x0/0xc
 [   66.438629] Last Breaking-Event-Address:
 [   66.438631]  [003edde8] blk_mq_timeout_check+0x6c/0xb8

 I looked into the dump; the full function is (annotated by me to match the
 source code):
 r2= tags
 r3= tag (4e)
 Dump of assembler code for function blk_mq_tag_to_rq:
0x003ed0f4 +0: lg  %r1,96(%r2) # r1 
 has now tags->rqs
0x003ed0fa +6: sllg%r2,%r3,3   # r2 
 has tag*8
0x003ed100 +12:lg  %r2,0(%r2,%r1)  # r2 
 now has rq (=tags->rqs[tag])
0x003ed106 +18:lg  %r1,48(%r2) # r1 
 now has rq->q
0x003ed10c +24:tm  68(%r2),8   # 
 test for rq->cmd_flags & REQ_FLUSH_SEQ
0x003ed110 +28:je  0x3ed122 blk_mq_tag_to_rq+46  #  if 
 not goto 3ed122
0x003ed114 +32:lg  %r4,1672(%r1) 

[PATCH] arm, kvm: fix double lock on cpu_add_remove_lock

2014-04-06 Thread Ming Lei
The patch "arm, kvm: Fix CPU hotplug callback registration"
in the -next tree holds the lock before calling the two functions:

kvm_vgic_hyp_init()
kvm_timer_hyp_init()

and both functions call register_cpu_notifier()
to register a CPU notifier, causing a double lock on cpu_add_remove_lock.

Considering that both functions are only called inside
kvm_arch_init() with cpu_add_remove_lock held, simply use
__register_cpu_notifier() to fix the problem.

Cc: Paolo Bonzini pbonz...@redhat.com
Cc: Christoffer Dall christoffer.d...@linaro.org
Cc: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Cc: Rafael J. Wysocki rafael.j.wyso...@intel.com
Signed-off-by: Ming Lei tom.leim...@gmail.com
---
 virt/kvm/arm/arch_timer.c |2 +-
 virt/kvm/arm/vgic.c   |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 5081e80..22fa819 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -277,7 +277,7 @@ int kvm_timer_hyp_init(void)
 
host_vtimer_irq = ppi;
 
-   err = register_cpu_notifier(kvm_timer_cpu_nb);
+   err = __register_cpu_notifier(kvm_timer_cpu_nb);
if (err) {
kvm_err("Cannot register timer CPU notifier\n");
goto out_free;
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 8ca405c..47b2983 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1496,7 +1496,7 @@ int kvm_vgic_hyp_init(void)
goto out;
}
 
-   ret = register_cpu_notifier(vgic_cpu_nb);
+   ret = __register_cpu_notifier(vgic_cpu_nb);
if (ret) {
kvm_err("Cannot register vgic CPU notifier\n");
goto out_free_irq;
-- 
1.7.9.5



Re: [GIT PULL] KVM/ARM for 3.15

2014-03-05 Thread Ming Lei
On Wed, Mar 5, 2014 at 5:16 PM, Anup Patel a...@brainfault.org wrote:
 On Wed, Mar 5, 2014 at 10:55 AM, Ming Lei ming@canonical.com wrote:
 On Wed, Mar 5, 2014 at 1:23 PM, Ming Lei ming@canonical.com wrote:
 On Tue, Mar 4, 2014 at 10:27 AM, Marc Zyngier marc.zyng...@arm.com

 Marc Zyngier (12):
   arm64: KVM: force cache clean on page fault when caches are off
   arm64: KVM: allows discrimination of AArch32 sysreg access
   arm64: KVM: trap VM system registers until MMU and caches are ON
   ARM: KVM: introduce kvm_p*d_addr_end
   arm64: KVM: flush VM pages before letting the guest enable caches

 I tested the first 5 patches on an APM arm64 board; only after
 applying them can qemu boot the kernel successfully, otherwise
 the kernel can't be booted from qemu.

 For the first 5 patches, please feel free to add:

 These patches are required for using KVM in presence of APM L3 cache.

 Usually, APM U-boot enables L3 cache by default hence KVM does not
 work for you without these patches.

 To have KVM working without these patches you will need to explicitly
 disable L3 cache from APM U-boot before starting Linux kernel.

Anup, thanks for your input.

We observed that when CPU load is high, qemu can sometimes
launch the kernel successfully on APM arm64, so that might be
related to the L3 cache.

But we did have an old kernel that supported qemu well with the
same U-Boot; maybe that kernel disabled the L3 cache.

From our point of view, these patches are absolutely required, since we
need to run a bootloader (UEFI, GRUB) from qemu.

Thanks,
--
Ming Lei


Re: [GIT PULL] KVM/ARM for 3.15

2014-03-04 Thread Ming Lei
On Tue, Mar 4, 2014 at 10:27 AM, Marc Zyngier marc.zyng...@arm.com wrote:
 Paolo, Gleb,

 Please pull the following tag to get what we currently have queued for
 3.15. This series fixes a number of issues we have when the guest
 runs with caches off.

 Thanks,

 M.

 The following changes since commit 1b385cbdd74aa803e966e01e5fe49490d6044e30:

   kvm, vmx: Really fix lazy FPU on nested guest (2014-02-27 22:54:11 +0100)

 are available in the git repository at:

   git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git 
 tags/kvm-for-3.15-1

 for you to fetch changes up to 56041bf920d2937b7cadcb30cb206f0372eee814:

   ARM: KVM: fix warning in mmu.c (2014-03-03 01:15:25 +)

 
 This series fixes coherency issues on arm and arm64 when the guest
 runs with caches off, and fixes a couple of other bugs in the process.

 

 Marc Zyngier (12):
   arm64: KVM: force cache clean on page fault when caches are off
   arm64: KVM: allows discrimination of AArch32 sysreg access
   arm64: KVM: trap VM system registers until MMU and caches are ON
   ARM: KVM: introduce kvm_p*d_addr_end
   arm64: KVM: flush VM pages before letting the guest enable caches

I tested the first 5 patches on an APM arm64 board; only after
applying them can qemu boot the kernel successfully, otherwise
the kernel can't be booted from qemu.

Thanks Marc.


Thanks,
--
Ming Lei


Re: [GIT PULL] KVM/ARM for 3.15

2014-03-04 Thread Ming Lei
On Wed, Mar 5, 2014 at 1:23 PM, Ming Lei ming@canonical.com wrote:
 On Tue, Mar 4, 2014 at 10:27 AM, Marc Zyngier marc.zyng...@arm.com

 Marc Zyngier (12):
   arm64: KVM: force cache clean on page fault when caches are off
   arm64: KVM: allows discrimination of AArch32 sysreg access
   arm64: KVM: trap VM system registers until MMU and caches are ON
   ARM: KVM: introduce kvm_p*d_addr_end
   arm64: KVM: flush VM pages before letting the guest enable caches

 I tested the first 5 patches on an APM arm64 board; only after
 applying them can qemu boot the kernel successfully, otherwise
 the kernel can't be booted from qemu.

For the first 5 patches, please feel free to add:

 Tested-by: Ming Lei ming@canonical.com


Thanks,
--
Ming Lei