On 9/15/15 8:54 PM, Paolo Bonzini wrote:
On 15/09/2015 12:30, Wanpeng Li wrote:
+	if (!nested) {
+		vpid = find_first_zero_bit(vmx_vpid_bitmap, VMX_NR_VPIDS);
+		if (vpid < VMX_NR_VPIDS) {
			vmx->vpid = vpid;
			__set_bit(vpid,
On 2015-09-16 04:36, Wanpeng Li wrote:
> On 9/16/15 1:32 AM, Jan Kiszka wrote:
>> On 2015-09-15 12:14, Wanpeng Li wrote:
>>> On 9/14/15 10:54 PM, Jan Kiszka wrote:
Last but not least: the guest can now easily exhaust the host's pool of
vpid by simply spawning plenty of VCPUs for L2, no?
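The exhaustion scenario raised here can be modeled in a few lines. The sketch below is a hedged userspace model, not KVM's actual allocate_vpid()/free_vpid(): the names, the tiny pool size, and the fallback convention are assumptions for illustration. One common mitigation is to fall back to vpid 0 ("no VPID", i.e. flush on every switch) instead of failing, so a guest spawning many L2 VCPUs only degrades performance rather than breaking.

```c
#include <assert.h>
#include <stdbool.h>

#define NR_VPIDS 8  /* tiny pool for illustration; real hardware allows 65536 */

/* vpid 0 is reserved: it conventionally means "no VPID tag" */
static bool vpid_in_use[NR_VPIDS] = { [0] = true };

static int allocate_vpid(void)
{
    for (int vpid = 1; vpid < NR_VPIDS; vpid++) {
        if (!vpid_in_use[vpid]) {
            vpid_in_use[vpid] = true;
            return vpid;
        }
    }
    return 0;  /* pool exhausted: run untagged and flush, don't fail */
}

static void free_vpid(int vpid)
{
    if (vpid)  /* freeing vpid 0 is a no-op */
        vpid_in_use[vpid] = false;
}
```

With this scheme, exhaustion by L2 VCPUs is bounded: the excess VCPUs simply share vpid 0 and pay with extra TLB flushes.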
On 2015-09-15 23:19, Alex Williamson wrote:
> On Mon, 2015-04-13 at 02:32 +0300, Nadav Amit wrote:
>> Due to an old SeaBIOS bug, QEMU re-enables LINT0 after reset. That bug is
>> long gone,
>> and therefore this hack is no longer needed. Since it violates the
>> specification, it is removed.
>>
>>
Enhance allocate/free_vpid to handle shadow vpid.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 24 +++-
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9ff6a3f..4956081 100644
---
v2 -> v3:
* enhance allocate/free_vpid as Jan's suggestion
* add more comments to 2/2
v1 -> v2:
* enhance allocate/free_vpid to handle shadow vpid
* drop empty space
* allocate shadow vpid during initialization
* For each nested vmentry, if vpid12 is changed, reuse shadow vpid w/ an
VPID is used to tag the address space and avoid a TLB flush. Currently L0 uses
the same VPID to run L1 and all its guests. KVM flushes the VPID when switching
between L1 and L2.
This patch advertises VPID to the L1 hypervisor, so the address spaces of L1
and L2 can be treated separately and avoid TLB
On Tue, Sep 15, 2015 at 09:24:15PM -0400, Tejun Heo wrote:
> Hello, Paul.
>
> On Tue, Sep 15, 2015 at 04:38:18PM -0700, Paul E. McKenney wrote:
> > Well, the decision as to what is too big for -stable is owned by the
> > -stable maintainers, not by me.
>
> Is it tho? Usually the subsystem
On 9/16/15 1:32 AM, Jan Kiszka wrote:
On 2015-09-15 12:14, Wanpeng Li wrote:
On 9/14/15 10:54 PM, Jan Kiszka wrote:
Last but not least: the guest can now easily exhaust the host's pool of
vpid by simply spawning plenty of VCPUs for L2, no? Is this acceptable
or should there be some limit?
I
https://bugzilla.kernel.org/show_bug.cgi?id=104631
Bug ID: 104631
Summary: Error on walk_shadow_page_get_mmio_spte when starting
Qemu
Product: Virtualization
Version: unspecified
Kernel Version: 4.3.0-rc1
Hardware: All
https://bugzilla.kernel.org/show_bug.cgi?id=104631
--- Comment #1 from Tasos Sahanidis ---
Created attachment 187701
--> https://bugzilla.kernel.org/attachment.cgi?id=187701&action=edit
Output from dmesg
On 9/16/15 6:00 AM, David Matlack wrote:
On Tue, Sep 15, 2015 at 12:04 AM, Oliver Yang wrote:
Hi Guys,
I found below patch for KVM TSC trapping / migration support,
https://lkml.org/lkml/2011/1/6/90
It seems the patch was not merged into Linux mainline.
So I have 3
--
BancoPosta Loans
Viale Europa,
175-00144 Roma,
Italy.
Email: bancopost...@gmail.com
Good day, ladies and gentlemen,
Do you need a loan for a specific purpose?
BancoPosta Bank in Italy has a favorable loan for you. We
offer secured and unsecured
Hello, Paul.
On Tue, Sep 15, 2015 at 04:38:18PM -0700, Paul E. McKenney wrote:
> Well, the decision as to what is too big for -stable is owned by the
> -stable maintainers, not by me.
Is it tho? Usually the subsystem maintainer knows the best and has
most say in it. I was mostly curious
Hello,
On Tue, Sep 15, 2015 at 02:38:30PM -0700, Paul E. McKenney wrote:
> I did take a shot at adding the rcu_sync stuff during this past merge
> window, but it did not converge quickly enough to make it. It looks
> quite good for the next merge window. There have been changes in most
> of the
On Tue, Sep 15, 2015 at 06:28:11PM -0400, Tejun Heo wrote:
> Hello,
>
> On Tue, Sep 15, 2015 at 02:38:30PM -0700, Paul E. McKenney wrote:
> > I did take a shot at adding the rcu_sync stuff during this past merge
> > window, but it did not converge quickly enough to make it. It looks
> > quite
On Tue, Sep 15, 2015 at 12:04 AM, Oliver Yang wrote:
> Hi Guys,
>
> I found below patch for KVM TSC trapping / migration support,
>
> https://lkml.org/lkml/2011/1/6/90
>
> It seems the patch was not merged into Linux mainline.
>
> So I have 3 questions here,
>
> 1. Can
On Tue, 15 Sep 2015 14:41:54 +0800
Jason Wang wrote:
> We only want zero-length mmio eventfds to be registered on
> KVM_FAST_MMIO_BUS, so check this explicitly when arg->len is zero.
>
> Cc: sta...@vger.kernel.org
> Cc: Gleb Natapov
>
On 14.09.2015 11:38, Wanpeng Li wrote:
> If there is already some polling ongoing, it's impossible to disable the
> polling, since as soon as somebody sets halt_poll_ns to 0, polling will
> never stop, as grow and shrink are only handled if halt_poll_ns is != 0.
>
> This patch fixes it by
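The failure mode described above is easy to model outside the kernel. This is a hedged userspace sketch: halt_poll_ns_param, update_halt_poll(), and the grow/shrink factors are invented for the illustration, not the actual kvm_vcpu_block() code.

```c
static unsigned int halt_poll_ns_param = 500000;  /* models the module parameter */

/* Returns the vcpu's new poll window. The first check is the fix: without
 * it, a vcpu that is already polling keeps its old window forever once the
 * parameter is set to 0, because grow/shrink only run when it is != 0. */
static unsigned int update_halt_poll(unsigned int cur, int poll_succeeded)
{
    if (!halt_poll_ns_param)
        return 0;                      /* the fix: force ongoing polling off */
    if (poll_succeeded)
        return cur ? cur * 2 : 10000;  /* grow (factors are illustrative) */
    return cur / 2;                    /* shrink */
}
```

The point is only the ordering: the disable check must run unconditionally, before any grow/shrink decision.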
Currently, if a zero-length mmio eventfd is assigned on
KVM_MMIO_BUS, it will never be found by kvm_io_bus_cmp(), since that
always compares the kvm_io_range() with the length that the guest
wrote. This will cause, e.g. for vhost, the kick to be trapped by qemu
userspace instead of vhost. Fix this by
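The matching problem can be sketched in miniature. This is a hedged userspace model with invented names (io_range, range_matches), not the real kvm_io_bus_cmp(); it shows why comparing against the guest's write length never matches a zero-length registration unless zero is treated as a wildcard.

```c
struct io_range {
    unsigned long addr;
    int len;           /* 0 = "wildcard": match an access of any length */
};

/* Returns nonzero if an access of 'len' bytes at 'addr' hits 'dev'. */
static int range_matches(const struct io_range *dev,
                         unsigned long addr, int len)
{
    if (addr != dev->addr)
        return 0;
    /* A zero-length registration ignores the length the guest wrote.
     * Comparing dev->len against the guest's length, as before, would
     * make a len == 0 device unreachable for any real access. */
    return dev->len == 0 || dev->len == len;
}
```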
This patch factors out core eventfd assign/deassign logic and leaves
the argument checking and bus index selection to callers.
Cc: sta...@vger.kernel.org
Cc: Gleb Natapov
Cc: Paolo Bonzini
Signed-off-by: Jason Wang
---
We register a wildcard mmio eventfd on two buses: once on KVM_MMIO_BUS
and once on KVM_FAST_MMIO_BUS, but with a single iodev
instance. This leads to an issue: kvm_io_bus_destroy() knows
nothing about the devices on the two buses pointing to a single dev, which
will lead to a double free[1] during
Cc: Gleb Natapov
Cc: Paolo Bonzini
Signed-off-by: Jason Wang
---
Documentation/virtual/kvm/api.txt | 7 ++-
include/uapi/linux/kvm.h | 1 +
virt/kvm/kvm_main.c | 1 +
3 files changed, 8 insertions(+), 1
Cc: Gleb Natapov
Cc: Paolo Bonzini
Signed-off-by: Jason Wang
---
arch/x86/kvm/trace.h | 18 ++
arch/x86/kvm/vmx.c | 1 +
arch/x86/kvm/x86.c | 1 +
3 files changed, 20 insertions(+)
diff --git
Hi Guys,
I found below patch for KVM TSC trapping / migration support,
https://lkml.org/lkml/2011/1/6/90
It seems the patch was not merged into Linux mainline.
So I have 3 questions here,
1. Can KVM support TSC trapping today? If not, what is the plan?
2. What is the solution if my SMP
On Tue, 15 Sep 2015 14:41:56 +0800
Jason Wang wrote:
> We register a wildcard mmio eventfd on two buses: once on KVM_MMIO_BUS
> and once on KVM_FAST_MMIO_BUS, but with a single iodev
> instance. This leads to an issue: kvm_io_bus_destroy() knows
> nothing about the devices
We only want zero-length mmio eventfds to be registered on
KVM_FAST_MMIO_BUS, so check this explicitly when arg->len is zero.
Cc: sta...@vger.kernel.org
Cc: Gleb Natapov
Cc: Paolo Bonzini
Signed-off-by: Jason Wang
---
Hi:
This series fixes two issues of fast mmio eventfd:
1) A single iodev instance was registered on two buses: KVM_MMIO_BUS
and KVM_FAST_MMIO_BUS. This will cause a double free in
ioeventfd_destructor()
2) A zero-length iodev on KVM_MMIO_BUS will never be found by
kvm_io_bus_cmp(). This will
On Tue, 15 Sep 2015 14:41:55 +0800
Jason Wang wrote:
> This patch factors out core eventfd assign/deassign logic and leaves
> the argument checking and bus index selection to callers.
>
> Cc: sta...@vger.kernel.org
> Cc: Gleb Natapov
> Cc: Paolo Bonzini
On Tue, 15 Sep 2015 14:41:57 +0800
Jason Wang wrote:
> Currently, if a zero-length mmio eventfd is assigned on
> KVM_MMIO_BUS, it will never be found by kvm_io_bus_cmp(), since that
> always compares the kvm_io_range() with the length that the guest
> wrote. This will cause e.g
On Mon, Sep 14, 2015 at 04:46:28PM +0100, Marc Zyngier wrote:
> On 14/09/15 16:06, Will Deacon wrote:
> > When restoring the system register state for an AArch32 guest at EL2,
> > writes to DACR32_EL2 may not be correctly synchronised by Cortex-A57,
> > which can lead to the guest effectively
Hi Dmitri,
On Fri, Sep 11, 2015 at 03:40:00PM +0100, Dimitri John Ledkov wrote:
> If one typically only boots full disk images, one wouldn't necessarily
> want to statically link glibc for the guest-init feature of the
> kvmtool, as statically linked glibc triggers heavy security
> maintenance.
On Tue, Sep 15, 2015 at 9:27 AM, Paolo Bonzini wrote:
> This new statistic can help diagnosing VCPUs that, for any reason,
> trigger bad behavior of halt_poll_ns autotuning.
>
> For example, say halt_poll_ns = 48, and wakeups are spaced exactly
> like 479us, 481us, 479us,
Hi Paolo,
(Please ignore the previous mail that did not include "qemu-devel")
Thanks for your review and suggestions. I'll fix this patch
accordingly and please see my replies below.
best regards,
Houcheng Lin
2015-09-15 17:41 GMT+08:00 Paolo Bonzini :
> This is okay and
On Tue, 15 Sep 2015 18:12:43 +0200
Paolo Bonzini wrote:
>
>
> On 14/08/2015 16:52, Xiao Guangrong wrote:
> > NFIT is defined in ACPI 6.0: 5.2.25 NVDIMM Firmware Interface Table (NFIT)
> >
> > Currently, we only support PMEM mode. Each device has 3 tables:
> > - SPA table,
On Tue, Sep 15, 2015 at 04:16:07PM +0100, Andre Przywara wrote:
> Hi Christoffer,
>
> On 14/09/15 12:42, Christoffer Dall wrote:
>
> Where is this done? I see that the physical dist state is altered on the
> actual IRQ forwarding, but not on later exits/entries? Do you mean
>
On 9/14/15 10:54 PM, Jan Kiszka wrote:
On 2015-09-14 14:52, Wanpeng Li wrote:
VPID is used to tag the address space and avoid a TLB flush. Currently L0 uses
the same VPID to run L1 and all its guests. KVM flushes the VPID when switching
between L1 and L2.
This patch advertises VPID to the L1
On 15/09/2015 04:11, Houcheng Lin wrote:
> The OS dependent code for android that implements functions missing in bionic
> C, including:
> - getdtablesize(): call getrlimit() instead.
This is okay and can be done unconditionally (introduce a new
qemu_getdtablesize function that is defined
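A minimal sketch of the suggested wrapper. The name qemu_getdtablesize, the __BIONIC__ guard, and the getrlimit() fallback are assumptions based on the review comment, not actual QEMU code.

```c
#include <unistd.h>
#include <sys/resource.h>

/* Hypothetical wrapper: use the libc call where it exists, otherwise
 * derive the table size from RLIMIT_NOFILE as the patch suggests. */
static int qemu_getdtablesize(void)
{
#ifdef __BIONIC__            /* bionic lacks getdtablesize() */
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    return (int)rl.rlim_cur;
#else
    return getdtablesize();
#endif
}
```

Defining it unconditionally in one place lets all callers drop their own #ifdefs, which matches the reviewer's "introduce a new function" suggestion.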
On 9/15/15 12:08 AM, Bandan Das wrote:
Wanpeng Li writes:
VPID is used to tag the address space and avoid a TLB flush. Currently L0 uses
the same VPID to run L1 and all its guests. KVM flushes the VPID when switching
between L1 and L2.
This patch advertises VPID to the L1
From: "Suzuki K. Poulose"
At the moment, we only support maximum of 3-level page table for
swapper. With 48bit VA, 64K has only 3 levels and 4K uses section
mapping. Add support for 4-level page table for swapper, needed
by 16K pages.
Cc: Ard Biesheuvel
On 15/09/2015 08:41, Jason Wang wrote:
> +With KVM_CAP_FAST_MMIO, a zero-length mmio eventfd is allowed, letting
> +the kernel ignore the length of the guest write for a possibly faster
> +response. Note that the speedup may only work on some specific
> +architectures and setups. Otherwise, it's as fast as
On 15/09/2015 08:41, Jason Wang wrote:
> Hi:
>
> This series fixes two issues of fast mmio eventfd:
>
> 1) A single iodev instance was registered on two buses: KVM_MMIO_BUS
>    and KVM_FAST_MMIO_BUS. This will cause a double free in
>    ioeventfd_destructor()
> 2) A zero-length iodev on
On 01/09/2015 11:14, Stefan Hajnoczi wrote:
>> >
>> > When I was digging into live migration code, i noticed that the same MR
>> > name may
>> > cause the name "idstr", please refer to qemu_ram_set_idstr().
>> >
>> > Since nvdimm devices do not have parent-bus, it will trigger the abort()
>>
Although the ThumbEE registers and traps were present in earlier
versions of the v8 architecture, they were retrospectively removed, so
we can do the same.
Cc: Marc Zyngier
Signed-off-by: Will Deacon
---
arch/arm64/include/asm/kvm_arm.h | 1 -
On 15 September 2015 at 17:15, Will Deacon wrote:
> Although the ThumbEE registers and traps were present in earlier
> versions of the v8 architecture, they were retrospectively removed, so
> we can do the same.
>
> Cc: Marc Zyngier
> Signed-off-by:
On 15/09/2015 15:36, Christian Borntraeger wrote:
> I am wondering why the old code behaved in such fatal ways. Is there
> some interaction between waiting for a reschedule in the
> synchronize_sched writer and some fork code actually waiting for the
> read side to get the lock together with
On 07/09/2015 16:11, Igor Mammedov wrote:
>
> here is common concepts that could be reused.
> - on physical system both DIMM and NVDIMM devices use
> the same slots. We could share QEMU's '-m slots' option between
> both devices. An alternative to not sharing would be to introduce
>
Hi
Please, send any topic that you are interested in covering.
At the end of Monday I will send an email with the agenda or the
cancellation of the call, so hurry up.
After discussions at the QEMU Summit, we are going to keep a KVM call
always open where you can add topics.
Call details:
By
On 09/09/2015 08:05, Xiao Guangrong wrote:
> +	if (!guest_cpuid_has_pcommit(vcpu) && nested)
> +		vmx->nested.nested_vmx_secondary_ctls_high &=
> +			~SECONDARY_EXEC_PCOMMIT;
It is legal to set CPUID multiple times, so I think we need
if
From: "Suzuki K. Poulose"
36bit VA lets us use 2 level page tables while limiting the
available address space to 64GB.
Cc: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Suzuki K.
From: "Suzuki K. Poulose"
Rearrange the code for fake pgd handling, which is applicable
to only ARM64. The intention is to keep the common code cleaner,
unaware of the underlying hacks.
Cc: kvm...@lists.cs.columbia.edu
Cc: christoffer.d...@linaro.org
Cc:
From: "Suzuki K. Poulose"
This patch turns on the 16K page support in the kernel. We
support 48bit VA (4 level page tables) and 47bit VA (3 level
page tables).
Cc: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
From: "Suzuki K. Poulose"
We use !CONFIG_ARM64_64K_PAGES for CONFIG_ARM64_4K_PAGES
(and vice versa) in code. This has worked well so far, since
we only had two options. Now, with the introduction of 16K,
these cases will break. This patch cleans up the code to
use the
From: "Suzuki K. Poulose"
The existing fake pgd handling code assumes that the stage-2 entry
level can only be one level down from that of the host, which may not
always be true (e.g., with the introduction of 16K page size).
e.g.
With 16k page size and 48bit VA and 40bit IPA we
From: "Suzuki K. Poulose"
Now that we can calculate the number of levels required for
mapping a va width, reserve exact number of pages that would
be required to cover the idmap. The idmap should be able to handle
the maximum physical address size supported.
Cc: Ard
On 25/08/2015 18:03, Stefan Hajnoczi wrote:
>> >
>> > +static uint64_t get_file_size(int fd)
>> > +{
>> > +    struct stat stat_buf;
>> > +    uint64_t size;
>> > +
>> > +    if (fstat(fd, &stat_buf) < 0) {
>> > +        return 0;
>> > +    }
>> > +
>> > +    if (S_ISREG(stat_buf.st_mode)) {
>> > +
On 14/08/2015 16:52, Xiao Guangrong wrote:
> NFIT is defined in ACPI 6.0: 5.2.25 NVDIMM Firmware Interface Table (NFIT)
>
> Currently, we only support PMEM mode. Each device has 3 tables:
> - SPA table, define the PMEM region info
>
> - MEM DEV table, it has the @handle which is used to
On Tue, 15 Sep 2015 17:07:55 +0200
Paolo Bonzini wrote:
> On 15/09/2015 08:41, Jason Wang wrote:
> > +With KVM_CAP_FAST_MMIO, a zero-length mmio eventfd is allowed, letting
> > +the kernel ignore the length of the guest write for a possibly faster
> > +response. Note the speedup
On 02/09/2015 21:01, Sebastian Schütte wrote:
> I inserted some printk() lines into init_vmcb() around the call of
> svm_set_guest_pat() to print out the g_pat value as well as
> svm->vcpu.vcpu_id and noticed that something was off:
>
> Initially, the PATs of all VCPUs are set to
On 15/09/2015 18:44, Cornelia Huck wrote:
>> > Can you explain why? If there is any non-zero valid length, "wildcard
>> > length" (represented by zero) would also make sense.
> What is a wildcard match supposed to mean in this case? The datamatch
> field contains the queue index for the device
On Tue, 09/15 10:11, Houcheng Lin wrote:
> From: Houcheng
Thanks for sending patches! Please include qemu-de...@nongnu.org list for QEMU
changes.
Fam
>
> This patch is to build qemu in android ndk tool-chain, and has been tested in
> both
> x86_64 and x86 android
This adds real and virtual mode handlers for the H_PUT_TCE_INDIRECT and
H_STUFF_TCE hypercalls for user space emulated devices such as IBMVIO
devices or emulated PCI. These calls allow adding multiple entries
(up to 512) into the TCE table in one call which saves time on
transition between kernel
At the moment spapr_tce_tables is not protected against races. This makes
use of RCU-variants of list helpers. As some bits are executed in real
mode, this makes use of just introduced list_for_each_entry_rcu_notrace().
This converts release_spapr_tce_table() to a RCU scheduled handler.
At the moment pages used for TCE tables (in addition to pages addressed
by TCEs) are not counted in locked_vm counter so a malicious userspace
tool can call ioctl(KVM_CREATE_SPAPR_TCE) as many times as RLIMIT_NOFILE and
lock a lot of memory.
This adds counting for pages used for TCE tables.
This
SPAPR_TCE_SHIFT is used in only a few places, and since IOMMU_PAGE_SHIFT_4K
can easily be used instead, remove SPAPR_TCE_SHIFT.
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 --
arch/powerpc/kvm/book3s_64_vio.c | 3 ++-
The KVM_SMI capability follows the KVM_S390_SET_IRQ_STATE capability,
which is "4.95"; this changes the number of the KVM_SMI chapter to 4.96.
Signed-off-by: Alexey Kardashevskiy
---
Documentation/virtual/kvm/api.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
This adds real and virtual mode handlers for the H_PUT_TCE_INDIRECT and
H_STUFF_TCE hypercalls for user space emulated devices such as IBMVIO
devices or emulated PCI. These calls allow adding multiple entries
(up to 512) into the TCE table in one call which saves time on
transition between kernel
The KVM_SMI capability follows the KVM_S390_SET_IRQ_STATE capability,
which is "4.95"; this changes the number of the KVM_SMI chapter to 4.96.
Signed-off-by: Alexey Kardashevskiy
---
Documentation/virtual/kvm/api.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
At the moment spapr_tce_tables is not protected against races. This makes
use of RCU-variants of list helpers. As some bits are executed in real
mode, this makes use of just introduced list_for_each_entry_rcu_notrace().
This converts release_spapr_tce_table() to a RCU scheduled handler.
At the moment pages used for TCE tables (in addition to pages addressed
by TCEs) are not counted in locked_vm counter so a malicious userspace
tool can call ioctl(KVM_CREATE_SPAPR_TCE) as many times as RLIMIT_NOFILE and
lock a lot of memory.
This adds counting for pages used for TCE tables.
This
SPAPR_TCE_SHIFT is used in only a few places, and since IOMMU_PAGE_SHIFT_4K
can easily be used instead, remove SPAPR_TCE_SHIFT.
Signed-off-by: Alexey Kardashevskiy
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 --
arch/powerpc/kvm/book3s_64_vio.c | 3 ++-
Upcoming multi-tce support (H_PUT_TCE_INDIRECT/H_STUFF_TCE hypercalls)
will validate TCE (not to have unexpected bits) and IO address
(to be within the DMA window boundaries).
This introduces helpers to validate TCE and IO address.
Signed-off-by: Alexey Kardashevskiy
---
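The two checks described above can be sketched as userspace helpers. The names and semantics here are invented and simplified; the real helpers live in the powerpc KVM code and validate against the guest's actual DMA window and TCE permission layout.

```c
/* IO address check: must be page aligned and inside the DMA window. */
static int ioba_is_valid(unsigned long ioba, unsigned long window_size,
                         unsigned int page_shift)
{
    unsigned long page_mask = (1UL << page_shift) - 1;

    if (ioba & page_mask)          /* low bits must be zero */
        return 0;
    return ioba < window_size;     /* must fall inside the window */
}

/* TCE check: only the page address and, in this model, the two low
 * read/write permission bits may be set; any other bit is unexpected. */
static int tce_is_valid(unsigned long tce, unsigned int page_shift)
{
    unsigned long allowed = ~((1UL << page_shift) - 1) | 0x3UL;

    return (tce & ~allowed) == 0;
}
```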
These patches enable in-kernel acceleration for H_PUT_TCE_INDIRECT and
H_STUFF_TCE hypercalls which allow doing multiple (up to 512) TCE entries
update in a single call saving time on switching context. QEMU already
supports these hypercalls so this is just an optimization.
Both HV and PR KVM
This helper translates vmalloc'd addresses to linear addresses.
It is only used by the KVM MMU code now and resides in the HV KVM code.
We will need it further in the TCE code and the DMA memory preregistration
code called in real mode.
This makes real_vmalloc_addr() public and moves it to the
This defines list_for_each_entry_rcu_notrace and list_entry_rcu_notrace
which use rcu_dereference_raw_notrace instead of rcu_dereference_raw.
This allows using list_for_each_entry_rcu_notrace in real mode (MMU is off).
Signed-off-by: Alexey Kardashevskiy
---
This reworks the existing H_PUT_TCE/H_GET_TCE handlers to have one
exit path. This allows next patch to add locks nicely.
This moves the ioba boundaries check to a helper and adds a check that
the least significant bits are zero.
The patch is pretty mechanical (only the check for the least significant ioba bits is
This defines list_for_each_entry_rcu_notrace and list_entry_rcu_notrace
which use rcu_dereference_raw_notrace instead of rcu_dereference_raw.
This allows using list_for_each_entry_rcu_notrace in real mode (MMU is off).
Signed-off-by: Alexey Kardashevskiy
---
This reworks the existing H_PUT_TCE/H_GET_TCE handlers to have one
exit path. This allows next patch to add locks nicely.
This moves the ioba boundaries check to a helper and adds a check that
the least significant bits are zero.
The patch is pretty mechanical (only the check for the least significant ioba bits is
These patches enable in-kernel acceleration for H_PUT_TCE_INDIRECT and
H_STUFF_TCE hypercalls which allow doing multiple (up to 512) TCE entries
update in a single call saving time on switching context. QEMU already
supports these hypercalls so this is just an optimization.
Both HV and PR KVM
This helper translates vmalloc'd addresses to linear addresses.
It is only used by the KVM MMU code now and resides in the HV KVM code.
We will need it further in the TCE code and the DMA memory preregistration
code called in real mode.
This makes real_vmalloc_addr() public and moves it to the
v1 -> v2:
* enhance allocate/free_vpid to handle shadow vpid
* drop empty space
* allocate shadow vpid during initialization
* For each nested vmentry, if vpid12 is changed, reuse shadow vpid w/ an
invvpid.
VPID is used to tag the address space and avoid a TLB flush. Currently L0 uses
the
VPID is used to tag the address space and avoid a TLB flush. Currently L0 uses
the same VPID to run L1 and all its guests. KVM flushes the VPID when switching
between L1 and L2.
This patch advertises VPID to the L1 hypervisor, so the address spaces of L1
and L2 can be treated separately and avoid TLB
Enhance allocate/free_vpid to handle shadow vpid.
Suggested-by: Wincy Van
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/vmx.c | 33 +++--
1 file changed, 27 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx.c
Hi,
I'm running Debian Jessie with KVM 2.1 on 6 nodes with an NFS3 back-end
and sometimes when I migrate a VM it suddenly 'hangs'. The KVM process
is running on the new node, eating up all its CPU cycles, but it does
not respond to anything anymore. I don't see any errors logged and it
seems
Tejun,
commit d59cfc09c32a2ae31f1c3bc2983a0cd79afb3f14 (sched, cgroup: replace
signal_struct->group_rwsem with a global percpu_rwsem) causes some noticably
hickups when starting several kvm guests (which libvirt will move into cgroups
- each vcpu thread and each i/o thread)
When you now start
Hi Paolo,
Thanks for your review and suggestions. I'll fix this patch accordingly.
Please also see my replies below.
best regards,
Houcheng Lin
2015-09-15 17:41 GMT+08:00 Paolo Bonzini :
>
> This is okay and can be done unconditionally (introduce a new
> qemu_getdtablesize
On Tue, Sep 15, 2015 at 06:42:19PM +0200, Paolo Bonzini wrote:
>
>
> On 15/09/2015 15:36, Christian Borntraeger wrote:
> > I am wondering why the old code behaved in such fatal ways. Is there
> > some interaction between waiting for a reschedule in the
> > synchronize_sched writer and some fork
Hi Christoffer,
On 14/09/15 12:42, Christoffer Dall wrote:
Where is this done? I see that the physical dist state is altered on the
actual IRQ forwarding, but not on later exits/entries? Do you mean
kvm_vgic_flush_hwstate() with "flush"?
>>>
>>> this is a bug and should be
The SECONDARY_EXEC_RDTSCP must be available iff RDTSCP is enabled in the
guest.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/vmx.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index
From: "Suzuki K. Poulose"
This series enables the 16K page size support on Linux for arm64.
Adds support for 48bit VA(4 level), 47bit VA(3 level) and
36bit VA(2 level) with 16K. 16K was a late addition to the architecture
and is not implemented by all CPUs. Added a check
From: "Suzuki K. Poulose"
{V}TCR_EL2_TG0 is a 2-bit wide field, where:
00 - 4K
01 - 64K
10 - 16K
But we use only 1 bit, which has worked well so far since
we never cared about 16K. Fix it for 16K support.
Cc: Catalin Marinas
Cc: Will Deacon
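The field layout above can be written out as macros. This is a hedged sketch: the bit position (TG0 at bits 15:14 of TCR/VTCR) is taken from the ARMv8 architecture manual and should be double-checked there. The mask also shows why a single-bit test cannot distinguish 4K (0b00) from 16K (0b10).

```c
#define TG0_SHIFT   14
#define TG0_4K      (0UL << TG0_SHIFT)   /* 0b00 */
#define TG0_64K     (1UL << TG0_SHIFT)   /* 0b01 */
#define TG0_16K     (2UL << TG0_SHIFT)   /* 0b10 */
#define TG0_MASK    (3UL << TG0_SHIFT)   /* both bits must be examined */
```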
From: "Suzuki K. Poulose"
We use section maps with 4K page size to create the swapper/idmaps.
So far we have used !64K or 4K checks to handle the case where we
use the section maps.
This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to
handle cases where we
From: "Suzuki K. Poulose"
Move the kernel pagetable (both swapper and idmap) definitions
from the generic asm/page.h to a new file, asm/kernel-pgtable.h.
This is mostly a cosmetic change, to clean up the asm/page.h to
get rid of the arch specific details which are not
From: "Suzuki K. Poulose"
Introduce helpers for finding the number of page table
levels required for a given VA width, shift for a particular
page table level.
Convert the existing users to the new helpers. More users
to follow.
Cc: Ard Biesheuvel
From: Ard Biesheuvel
This patch adds the page size to the arm64 kernel image header
so that one can infer the PAGESIZE used by the kernel. This will
be helpful to diagnose failures to boot the kernel with page size
not supported by the CPU.
Signed-off-by: Ard
From: "Suzuki K. Poulose"
Update the help text for ARM64_64K_PAGES to reflect the reality
about AArch32 support.
Cc: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Suzuki K. Poulose
From: "Suzuki K. Poulose"
No functional changes. Group the common bits of VTCR_EL2
initialisation for better readability. The granule size
and the entry level are controlled by the page size.
Cc: Christoffer Dall
Cc: Marc Zyngier
On 09/09/2015 08:05, Xiao Guangrong wrote:
> Unify the update in vmx_cpuid_update()
>
> Signed-off-by: Xiao Guangrong
What if we instead start fresh from vmx_secondary_exec_control, like this:
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index
From: "Suzuki K. Poulose"
Ensure that the selected page size is supported by the
CPU(s).
Cc: Mark Rutland
Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Suzuki K. Poulose
On 26/08/2015 12:40, Xiao Guangrong wrote:
>>>
>>> +
>>> +    size = get_file_size(fd);
>>> +    buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>>
>> I guess the user will want to choose between MAP_SHARED and MAP_PRIVATE.
>> This can be added in the future.
>
> Good idea,
On 2015-09-15 12:14, Wanpeng Li wrote:
> On 9/14/15 10:54 PM, Jan Kiszka wrote:
>> Last but not least: the guest can now easily exhaust the host's pool of
>> vpid by simply spawning plenty of VCPUs for L2, no? Is this acceptable
>> or should there be some limit?
>
> I reuse the value of vpid02