Changelog:
changes from Avi's comments:
- comment for FNAME(fetch)
- add annotations (__acquires, __releases) for page_fault_start and
page_fault_end
changes from Marcelo's comments:
- remove mmu_is_invalid
- make release noslot pfn path more readable
The last patch which introduces
We cannot directly call kvm_release_pfn_clean to release the pfn,
since we may meet a noslot pfn, which is used to cache MMIO info in
the spte.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c |6 --
arch/x86/kvm/paging_tmpl.h |6 --
2 files
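The noslot-pfn release path this series cleans up can be sketched as a toy model (the names and bit encodings below are illustrative stand-ins, not the kernel's actual definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pfn_t;

/* Toy stand-ins: real KVM encodes error/noslot pfns in high bits,
 * with different masks than these. */
#define KVM_PFN_ERR_MASK ((pfn_t)1 << 62)
#define KVM_PFN_NOSLOT   ((pfn_t)1 << 63)

static bool is_error_pfn(pfn_t pfn)  { return pfn & KVM_PFN_ERR_MASK; }
static bool is_noslot_pfn(pfn_t pfn) { return pfn & KVM_PFN_NOSLOT; }

static int releases; /* counts real refcount drops */
static void kvm_release_pfn_clean(pfn_t pfn) { (void)pfn; releases++; }

/* The cleanup this series aims at: one helper that skips pfns which
 * never came from a real memslot (noslot pfns cache MMIO info in the
 * spte and have no struct page behind them to release). */
static void release_pfn(pfn_t pfn)
{
    if (is_error_pfn(pfn) || is_noslot_pfn(pfn))
        return; /* nothing to release */
    kvm_release_pfn_clean(pfn);
}
```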
Remove mmu_is_invalid and use is_invalid_pfn instead
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c |5 -
arch/x86/kvm/paging_tmpl.h |4 ++--
2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c
It helps us to clean up pfn release in the later patches.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 29 ++---
arch/x86/kvm/paging_tmpl.h | 18 --
2 files changed, 26 insertions(+), 21 deletions(-)
Let it return the emulate state instead of the spte, like __direct_map does.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/paging_tmpl.h | 31 ---
1 files changed, 12 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h
The only difference between FNAME(update_pte) and FNAME(pte_prefetch)
is that the former is allowed to prefetch a gfn from a dirty-logged slot,
so introduce a common function to prefetch the spte.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/paging_tmpl.h | 58
The function does not depend on the guest MMU mode, so move it out of
paging_tmpl.h.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 36
arch/x86/kvm/paging_tmpl.h | 24 ++--
2 files changed,
Wrap the common operations into these two functions
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 55
arch/x86/kvm/paging_tmpl.h | 12 -
2 files changed, 40 insertions(+), 27 deletions(-)
On Thu, 20 Sep 2012 16:03:17 -0400
Don Slutz d...@cloudswitch.com wrote:
Fix duplicate name (kvmclock = kvm_clock2) also.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 12
1 files changed, 8 insertions(+), 4 deletions(-)
diff --git
-Original Message-
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Thursday, September 20, 2012 5:20 PM
To: Hao, Xudong
Cc: Marcelo Tosatti; kvm@vger.kernel.org; Zhang, Xiantao
Subject: Re: [PATCH v3] kvm/fpu: Enable fully eager restore kvm FPU
On guest entry:
if
Hi Mel,
Thank you for this series. I have applied on clean 3.6-rc5 and tested, and
it works well for me - the lock contention is (still) gone and
isolate_freepages_block is much reduced.
Here is a typical test with these patches:
# grep -F '[k]' report | head -8
65.20% qemu-kvm
On 21.09.2012, at 07:44, Paul Mackerras wrote:
This enables userspace to get and set various SPRs (special-purpose
registers) using the KVM_[GS]ET_ONE_REG ioctls. With this, userspace
can get and set all the SPRs that are part of the guest state, either
through the KVM_[GS]ET_REGS ioctls,
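The ONE_REG interface hands registers across one at a time via a small descriptor; the uapi layout it relies on is sketched below (mirrors struct kvm_one_reg from the kernel headers, reproduced here for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One register per call: 'id' encodes which register (and its size),
 * 'addr' points at a userspace buffer holding the value. */
struct kvm_one_reg {
    uint64_t id;   /* KVM_REG_* identifier */
    uint64_t addr; /* userspace address of the value */
};
```

Userspace would then issue e.g. `ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg)` with the buffer at `reg.addr` holding the new value.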
ForAllxx:
run object method on every object in list
ForAll[a,b,c].print()
Signed-off-by: Jiří Župka jzu...@redhat.com
---
client/shared/base_utils.py | 81 +++---
1 files changed, 67 insertions(+), 14 deletions(-)
diff --git
When autotest tries to add a tap to a bridge, the test recognizes
whether the bridge is standard Linux or OpenVSwitch.
Also adds some utils for bridge manipulation.
Signed-off-by: Jiří Župka jzu...@redhat.com
---
client/shared/openvswitch.py | 583 ++
Allow creating a machine with tap devices which are not
connected to a bridge.
Add a function to fill the virtnet object with addresses.
Signed-off-by: Jiří Župka jzu...@redhat.com
---
client/tests/virt/virttest/kvm_vm.py |9 +
client/tests/virt/virttest/utils_misc.py |3 ++-
Signed-off-by: Jiří Župka jzu...@redhat.com
---
client/tests/virt/kvm/control.parallel|2 +-
client/tests/virt/virttest/libvirt_xml.py |2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/client/tests/virt/kvm/control.parallel
b/client/tests/virt/kvm/control.parallel
On Fri, Sep 21, 2012 at 10:13:33AM +0100, Richard Davies wrote:
Hi Mel,
Thank you for this series. I have applied on clean 3.6-rc5 and tested, and
it works well for me - the lock contention is (still) gone and
isolate_freepages_block is much reduced.
Excellent!
Here is a typical test
Mel Gorman wrote:
I did manage to get a couple which were slightly worse, but nothing like as
bad as before. Here are the results:
# grep -F '[k]' report | head -8
45.60% qemu-kvm [kernel.kallsyms] [k] clear_page_c
11.26% qemu-kvm [kernel.kallsyms] [k]
On Fri, Sep 21, 2012 at 11:15:51AM +0200, Alexander Graf wrote:
On 21.09.2012, at 07:44, Paul Mackerras wrote:
+union kvmppc_one_reg {
+ u32 wval;
+ u64 dval;
Phew. Is this guaranteed to always pad on the right, rather than left?
Absolutely (for big-endian targets). A
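The question is which half of dval a write through wval overlays; a minimal host-side demo of the union's aliasing (toy code run on the host, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* The union from the patch: one storage slot, two views of
 * different widths. */
union kvmppc_one_reg {
    uint32_t wval;
    uint64_t dval;
};

/* Which 32 bits does wval alias? On big-endian it overlays the HIGH
 * half of dval, on little-endian the LOW half -- the crux of the
 * "pad on the right" question. */
static uint32_t wval_view(uint64_t dval)
{
    union kvmppc_one_reg r;
    r.dval = dval;
    return r.wval;
}
```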
On Fri, Sep 21, 2012 at 10:17:01AM +0100, Richard Davies wrote:
Richard Davies wrote:
I did manage to get a couple which were slightly worse, but nothing like as
bad as before. Here are the results:
# grep -F '[k]' report | head -8
45.60% qemu-kvm [kernel.kallsyms] [k]
On 21.09.2012, at 11:52, Paul Mackerras wrote:
On Fri, Sep 21, 2012 at 11:15:51AM +0200, Alexander Graf wrote:
On 21.09.2012, at 07:44, Paul Mackerras wrote:
+union kvmppc_one_reg {
+ u32 wval;
+ u64 dval;
Phew. Is this guaranteed to always pad on the right, rather than
Hi Andrew,
Richard Davies and Shaohua Li have both reported lock contention
problems in compaction on the zone and LRU locks as well as
significant amounts of time being spent in compaction. This series
aims to reduce lock contention and scanning rates to reduce that CPU
usage. Richard reported
This reverts
mm-compaction-check-lock-contention-first-before-taking-lock.patch as it
is replaced by a later patch in the series.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c |5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/compaction.c
This reverts
mm-compaction-abort-compaction-loop-if-lock-is-contended-or-run-too-long.patch
as it is replaced by a later patch in the series.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c | 12 +---
mm/internal.h |2 +-
2 files changed, 6 insertions(+), 8
Compaction's migrate scanner acquires the zone->lru_lock when scanning a range
of pages looking for LRU pages to acquire. It does this even if there are
no LRU pages in the range. If multiple processes are compacting then this
can cause severe locking contention. To make matters worse commit
From: Shaohua Li s...@fusionio.com
Changelog since V2
o Fix BUG_ON triggered due to pages left on cc.migratepages
o Make compact_zone_order() require non-NULL arg `contended'
Changelog since V1
o only abort the compaction if lock is contended or run too long
o Rearranged the code by Andrea
This reverts commit 7db8889a (mm: have order > 0 compaction start off
where it left) and commit de74f1cc (mm: have order > 0 compaction start
near a pageblock with free pages). These patches were a good idea and
tests confirmed that they massively reduced the amount of scanning but
the
Compaction's free scanner acquires the zone->lock when checking for PageBuddy
pages and isolating them. It does this even if there are no PageBuddy pages
in the range.
This patch defers acquiring the zone lock for as long as possible. In the
event there are no free pages in the pageblock then the
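The defer-the-lock idea can be sketched as a toy scan loop that only pays for the (simulated) zone lock once a candidate page is actually found; all names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

static int lock_acquisitions; /* how often the simulated zone lock was taken */

/* Scan a pageblock for buddy pages, deferring the lock until the
 * first candidate: a block with no PageBuddy pages costs no locking. */
static int isolate_range(const bool *page_is_buddy, size_t npages)
{
    bool locked = false;
    int isolated = 0;

    for (size_t i = 0; i < npages; i++) {
        if (!page_is_buddy[i])
            continue;              /* lockless check: skip non-buddy pages */
        if (!locked) {             /* first candidate: now take the lock */
            lock_acquisitions++;
            locked = true;
        }
        isolated++;
    }
    /* if (locked) spin_unlock(&zone->lock); */
    return isolated;
}
```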
This is almost entirely based on Rik's previous patches and discussions
with him about how this might be implemented.
Order > 0 compaction stops when enough free pages of the correct page
order have been coalesced. When doing subsequent higher order allocations,
it is possible for compaction to
When compaction was implemented it was known that scanning could potentially
be excessive. The ideal was that a counter be maintained for each pageblock
but maintaining this information would incur a severe penalty due to a
shared writable cache line. It has reached the point where the scanning
This reverts
mm-compaction-abort-compaction-loop-if-lock-is-contended-or-run-too-long-fix
as it is replaced by a later patch in the series.
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/compaction.c |3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/compaction.c
This is v3 of the ACPI memory hotplug functionality. Only the x86_64
target is supported for now.
Overview:
Dimm device layout is modeled with a new qemu command line
-dimm id=name,size=sz,node=pxm,populated=on|off
The starting physical address for all dimms is calculated automatically from
top
This allows extracting the beginning, end and name of a Device object.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
tools/acpi_extract.py | 28
1 files changed, 28 insertions(+), 0 deletions(-)
diff --git a/tools/acpi_extract.py
Extend the DSDT to include methods for handling memory hot-add and hot-remove
notifications and memory device status requests. These functions are called
from the memory device SSDT methods.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
src/acpi-dsdt.dsl | 70
The memory device generation is guided by qemu paravirt info. Seabios
first uses the info to setup SRAT entries for the hotplug-able memory slots.
Afterwards, build_memssdt uses the created SRAT entries to generate
appropriate memory device objects. One memory device (and corresponding SRAT
entry)
Live migration works after memory hot-add events, as long as the
qemu command line -dimm arguments are changed on the destination host
to specify populated=on for the dimms that have been hot-added.
If a command-line change has not occurred, the destination host does not yet
have the corresponding
The numa_fw_cfg paravirt interface is extended to include SRAT information for
all hotplug-able dimms. There are 3 words for each hotplug-able memory slot,
denoting start address, size and node proximity. The new info is appended after
existing numa info, so that the fw_cfg layout does not break.
Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
This patch implements a tail queue to store guest notifications for memory
hot-add and hot-remove requests.
Guest responses for memory hotplug command on a per-dimm basis can be detected
with the new hmp command info memhp or
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
src/acpi-dsdt.dsl | 15 +++
src/ssdt-mem.dsl |4
2 files changed, 19 insertions(+), 0 deletions(-)
diff --git a/src/acpi-dsdt.dsl b/src/acpi-dsdt.dsl
index 0d37bbc..8a18770 100644
---
This will allow us to update dimm state on OSPM-initiated eject operations e.g.
with echo 1 > /sys/bus/acpi/devices/PNP0C80\:00/eject
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
docs/specs/acpi_hotplug.txt |7 +++
hw/acpi_piix4.c |5 +
Dimm physical address offsets are calculated automatically and memory map is
adjusted accordingly. If a DIMM can fit before the PCI_HOLE_START (currently
0xe0000000), it will be added normally, otherwise its physical address will be
above 4GB.
Also create memory bus on i440fx-pcihost device.
pcimem_start and pcimem64_start are adjusted from srat entries. For this reason,
paravirt info (NUMA SRAT entries and number of cpus) need to be read before
pci_setup.
Imho, this is an ugly code change since SRAT bios tables and number of
cpus have to be read earlier. But the advantage is that no
in case of hot-remove failure on a guest that does not implement _OST,
the dimm bitmaps in qemu and Seabios show the dimm as unplugged, but the dimm
is still present on the qdev/memory bus. To avoid this inconsistency, we set the
dimm state to active/hot-plugged on a reset of the associated
This allows qemu to receive notifications from the guest OS on success or
failure of a memory hotplug request. The guest OS needs to implement the _OST
functionality for this to work (linux-next: http://lkml.org/lkml/2012/6/25/321)
This patch also updates dimm bitmap state and hot-remove pending
Add support for _OST method. _OST method will write into the correct I/O byte to
signal success / failure of hot-add or hot-remove to qemu.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
src/acpi-dsdt.dsl | 50 ++
query-balloon and info balloon should report total memory available to the
guest.
balloon inflate/deflate can also use all memory available to the guest
(initial + hotplugged memory).
The balloon driver has been minimally tested with the patch; please review and test.
Caveat: if the guest does not
Returns total physical memory available to guest in bytes, including hotplugged
memory. Note that the number reported here may be different from what the guest
sees e.g. if the guest has not logically onlined hotplugged memory.
This functionality is provided independently of a balloon device,
A 32-byte register is used to present up to 256 hotplug-able memory devices
to BIOS and OSPM. Hot-add and hot-remove functions trigger an ACPI hotplug
event through these. Only reads are allowed from these registers.
An ACPI hot-remove event is raised, but we need to wait for OSPM to eject the device.
We use
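A minimal sketch of such a presence bitmap, assuming one bit per hotplug-able slot read back one byte at a time (names are illustrative, not qemu's):

```c
#include <assert.h>
#include <stdint.h>

/* 256 slots, one presence bit each: 32 bytes of register space that
 * BIOS/OSPM read back one byte at a time. */
#define MAX_DIMMS 256
static uint8_t dimm_status[MAX_DIMMS / 8]; /* 32 bytes */

/* Mark a slot present, e.g. on hot-add, before raising the ACPI event. */
static void dimm_set_present(unsigned slot)
{
    dimm_status[slot / 8] |= (uint8_t)(1u << (slot % 8));
}

/* What a one-byte read of the register at 'offset' would return. */
static uint8_t dimm_status_readb(unsigned offset)
{
    return dimm_status[offset];
}
```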
Each hotplug-able memory slot is a DimmDevice. All DimmDevices are attached
to a new bus called DimmBus. This bus is introduced so that we no longer
depend on hotplug-capability of main system bus (the main bus does not allow
hotplugging). The DimmBus should be attached to a chipset Device (i440fx
Example:
-dimm id=dimm0,size=512M,node=0,populated=off
will define a 512M memory slot belonging to numa node 0.
When populated=on, a DimmDevice is created and hot-plugged at system startup.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
hw/Makefile.objs |2 +-
Define SSDT hotplug-able memory devices in _SB namespace. The dynamically
generated SSDT includes per memory device hotplug methods. These methods
just call methods defined in the DSDT. Also dynamically generate a MTFY
method and a MEON array of the online/available memory devices. ACPI
Qemu already calculates the 32-bit and 64-bit PCI starting offsets based on
initial memory and hotplug-able dimms. This info needs to be passed to Seabios
for PCI initialization.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
docs/specs/fwcfg.txt |9 +
Initialize the 32-bit and 64-bit pci starting offsets from values passed in by
the qemu paravirt interface QEMU_CFG_PCI_WINDOW. Qemu calculates the starting
offsets based on initial memory and hotplug-able dimms.
Signed-off-by: Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com
---
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly, because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
An idea to solve this is:
1) As Avi had proposed we can modify hardware ple_window
dynamically to avoid frequent PL-exit.
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When total number of VCPUs of system is less than or equal to physical CPUs,
PLE exits become costly since each VCPU can have dedicated PCPU, and
trying to find a target VCPU to yield_to just burns time in PLE handler.
This patch reduces
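The undercommit short-circuit can be modeled as a toy function (purely illustrative; the real patch operates on kvm's online vcpu lists):

```c
#include <assert.h>

/* If every VCPU can own a physical CPU (no overcommit), hunting for
 * a yield_to target is wasted work, so bail out of the PLE handler
 * early; under overcommit, every other vcpu is a candidate. */
static int ple_yield_attempts(int online_vcpus, int online_pcpus)
{
    if (online_vcpus <= online_pcpus)
        return 0;               /* undercommit: skip the directed yield */
    return online_vcpus - 1;    /* overcommit: try each other vcpu once */
}
```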
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When PLE handler fails to find a better candidate to yield_to, it
goes back and does spin again. This is acceptable when we do not
have overcommit.
But in overcommitted scenarios (especially when we have large
number of small guests), it is
On 2012-09-20 19:17, Dean Pucsek wrote:
On 2012-09-19, at 7:45 AM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2012-09-19 16:38, Avi Kivity wrote:
On 09/17/2012 10:36 PM, Dean Pucsek wrote:
Hello,
For my Masters thesis I am investigating the usage of Intel VT-x and
branch tracing in
On Fri, Sep 21, 2012 at 10:39:52AM +0200, Igor Mammedov wrote:
On Thu, 20 Sep 2012 16:03:17 -0400
Don Slutz d...@cloudswitch.com wrote:
Fix duplicate name (kvmclock = kvm_clock2) also.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 12
1
On 09/21/2012 08:00 AM, Raghavendra K T wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When total number of VCPUs of system is less than or equal to physical CPUs,
PLE exits become costly since each VCPU can have dedicated PCPU, and
trying to find a target VCPU to yield_to just
On 09/21/12 08:36, Eduardo Habkost wrote:
On Fri, Sep 21, 2012 at 10:39:52AM +0200, Igor Mammedov wrote:
On Thu, 20 Sep 2012 16:03:17 -0400
Don Slutz d...@cloudswitch.com wrote:
Fix duplicate name (kvmclock = kvm_clock2) also.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
On 9/21/2012 4:59 AM, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly,
Yes.
because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
An idea to solve this is:
1) As Avi had proposed we can modify
On 09/21/2012 08:00 AM, Raghavendra K T wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When PLE handler fails to find a better candidate to yield_to, it
goes back and does spin again. This is acceptable when we do not
have overcommit.
But in overcommitted scenarios (especially
On Fri, 21 Sep 2012 17:30:20 +0530
Raghavendra K T raghavendra...@linux.vnet.ibm.com wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When PLE handler fails to find a better candidate to yield_to, it
goes back and does spin again. This is acceptable when we do not
have
On 09/21/2012 06:46 AM, Mel Gorman wrote:
Hi Andrew,
Richard Davies and Shaohua Li have both reported lock contention
problems in compaction on the zone and LRU locks as well as
significant amounts of time being spent in compaction. This series
aims to reduce lock contention and scanning rates
On 09/21/2012 09:46 AM, Takuya Yoshikawa wrote:
On Fri, 21 Sep 2012 17:30:20 +0530
Raghavendra K T raghavendra...@linux.vnet.ibm.com wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When PLE handler fails to find a better candidate to yield_to, it
goes back and does spin again.
On Thu, Sep 20, 2012 at 04:06:27PM -0400, Don Slutz wrote:
From http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html
EAX should be KVM_CPUID_FEATURES (0x40000001) not 0.
Added hypervisor-vendor=kvm0 to get the older CPUID result. kvm1 selects the
newer one.
Why not just make
On 09/20/2012 05:13 PM, Alex Williamson wrote:
On Thu, 2012-09-20 at 16:36 -0400, Etienne Martineau wrote:
On 09/20/2012 03:37 PM, Alex Williamson wrote:
On Thu, 2012-09-20 at 15:08 -0400, Etienne Martineau wrote:
On 09/20/2012 02:16 PM, Alex Williamson wrote:
On Thu, 2012-09-20 at 13:27
On Fri, 2012-09-21 at 11:17 -0400, Etienne Martineau wrote:
On 09/20/2012 05:13 PM, Alex Williamson wrote:
On Thu, 2012-09-20 at 16:36 -0400, Etienne Martineau wrote:
On 09/20/2012 03:37 PM, Alex Williamson wrote:
On Thu, 2012-09-20 at 15:08 -0400, Etienne Martineau wrote:
On 09/20/2012
(CC'ing Casey on this, as I recommend his setup for an Intel-based solution)
Hello ShadesOfGrey,
Hehehe, talk about timing ;)
On Sep 20, 2012, at 5:15 PM, ShadesOfGrey shades_of_g...@earthlink.net wrote:
I'm looking to build a new personal computer. I want it to function as a
Linux
vfio-pci supports a mechanism like KVM's irqfd for unmasking an
interrupt through an eventfd. There are two ways to shutdown this
interface: 1) close the eventfd, 2) ioctl (such as disabling the
interrupt). Both of these do the release through a workqueue,
which can result in a segfault if two
On 09/21/2012 11:49 AM, Alex Williamson wrote:
On Fri, 2012-09-21 at 11:17 -0400, Etienne Martineau wrote:
On 09/20/2012 05:13 PM, Alex Williamson wrote:
On Thu, 2012-09-20 at 16:36 -0400, Etienne Martineau wrote:
On 09/20/2012 03:37 PM, Alex Williamson wrote:
On Thu, 2012-09-20 at 15:08
On 09/21/2012 06:32 PM, Rik van Riel wrote:
On 09/21/2012 08:00 AM, Raghavendra K T wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When total number of VCPUs of system is less than or equal to physical CPUs,
PLE exits become costly since each VCPU can have dedicated PCPU, and
On Fri, Sep 21, 2012 at 11:46:15AM +0100, Mel Gorman wrote:
This reverts
mm-compaction-check-lock-contention-first-before-taking-lock.patch as it
is replaced by a later patch in the series.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rafael Aquini aqu...@redhat.com
--
To
On Fri, Sep 21, 2012 at 11:46:16AM +0100, Mel Gorman wrote:
This reverts
mm-compaction-abort-compaction-loop-if-lock-is-contended-or-run-too-long-fix
as it is replaced by a later patch in the series.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rafael Aquini aqu...@redhat.com
--
To
On Fri, Sep 21, 2012 at 11:46:17AM +0100, Mel Gorman wrote:
This reverts
mm-compaction-abort-compaction-loop-if-lock-is-contended-or-run-too-long.patch
as it is replaced by a later patch in the series.
Signed-off-by: Mel Gorman mgor...@suse.de
Acked-by: Rafael Aquini aqu...@redhat.com
--
On 09/21/2012 07:22 PM, Rik van Riel wrote:
On 09/21/2012 09:46 AM, Takuya Yoshikawa wrote:
On Fri, 21 Sep 2012 17:30:20 +0530
Raghavendra K T raghavendra...@linux.vnet.ibm.com wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When PLE handler fails to find a better candidate to
On Fri, Sep 21, 2012 at 11:46:18AM +0100, Mel Gorman wrote:
From: Shaohua Li s...@fusionio.com
Changelog since V2
o Fix BUG_ON triggered due to pages left on cc.migratepages
o Make compact_zone_order() require non-NULL arg `contended'
Changelog since V1
o only abort the compaction if
On Fri, Sep 21, 2012 at 11:46:19AM +0100, Mel Gorman wrote:
Compaction's migrate scanner acquires the zone->lru_lock when scanning a range
of pages looking for LRU pages to acquire. It does this even if there are
no LRU pages in the range. If multiple processes are compacting then this
can cause
On Fri, Sep 21, 2012 at 11:46:20AM +0100, Mel Gorman wrote:
Compaction's free scanner acquires the zone->lock when checking for PageBuddy
pages and isolating them. It does this even if there are no PageBuddy pages
in the range.
This patch defers acquiring the zone lock for as long as possible.
On Fri, Sep 21, 2012 at 11:46:21AM +0100, Mel Gorman wrote:
This reverts commit 7db8889a (mm: have order > 0 compaction start off
where it left) and commit de74f1cc (mm: have order > 0 compaction start
near a pageblock with free pages). These patches were a good idea and
tests confirmed that
On Fri, Sep 21, 2012 at 11:46:22AM +0100, Mel Gorman wrote:
When compaction was implemented it was known that scanning could potentially
be excessive. The ideal was that a counter be maintained for each pageblock
but maintaining this information would incur a severe penalty due to a
shared
On Fri, Sep 21, 2012 at 11:46:23AM +0100, Mel Gorman wrote:
This is almost entirely based on Rik's previous patches and discussions
with him about how this might be implemented.
Order > 0 compaction stops when enough free pages of the correct page
order have been coalesced. When doing
To emulate level triggered interrupts, add a resample option to
KVM_IRQFD. When specified, a new resamplefd is provided that notifies
the user when the irqchip has been resampled by the VM. This may, for
instance, indicate an EOI. Also in this mode, posting of an interrupt
through an irqfd only
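A sketch of the extended ABI as described, mirroring struct kvm_irqfd from the kernel uapi (reproduced here for illustration; verify against the merged headers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define KVM_IRQFD_FLAG_DEASSIGN (1 << 0)
#define KVM_IRQFD_FLAG_RESAMPLE (1 << 1)

/* With FLAG_RESAMPLE set, 'fd' still triggers injection, while
 * 'resamplefd' is signaled when the irqchip is resampled by the VM
 * (e.g. on EOI), telling userspace it may re-post the level. */
struct kvm_irqfd {
    uint32_t fd;         /* eventfd that triggers injection */
    uint32_t gsi;        /* GSI this irqfd drives */
    uint32_t flags;
    uint32_t resamplefd; /* notification eventfd for resample mode */
    uint8_t  pad[16];
};
```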
Ping. There don't seem to be any objections to this. Thanks,
Alex
On Fri, 2012-09-14 at 17:04 -0600, Alex Williamson wrote:
On Fri, 2012-09-14 at 17:01 -0600, Alex Williamson wrote:
Same goodness as v4, plus:
- Addressed comments by Blue Swirl (thanks for the review)
(hopefully
On 09/21/12 10:18, Eduardo Habkost wrote:
On Thu, Sep 20, 2012 at 04:06:27PM -0400, Don Slutz wrote:
From http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html
EAX should be KVM_CPUID_FEATURES (0x40000001) not 0.
Added hypervisor-vendor=kvm0 to get the older CPUID result. kvm1
On 09/21/2012 05:51 AM, Marcelo Tosatti wrote:
On Fri, Sep 21, 2012 at 12:02:46AM +0300, Dor Laor wrote:
On 09/12/2012 06:39 PM, Marcelo Tosatti wrote:
HW TSC scaling is a feature of AMD processors that allows a
multiplier to be specified to the TSC frequency exposed to the guest.
KVM also
On 09/21/12 16:49, Eduardo Habkost wrote:
On Fri, Sep 21, 2012 at 04:26:58PM -0400, Don Slutz wrote:
On 09/21/12 10:18, Eduardo Habkost wrote:
On Thu, Sep 20, 2012 at 04:06:27PM -0400, Don Slutz wrote:
From http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/00100.html
EAX should be
On Fri, 21 Sep 2012 11:46:18 +0100
Mel Gorman mgor...@suse.de wrote:
Changelog since V2
o Fix BUG_ON triggered due to pages left on cc.migratepages
o Make compact_zone_order() require non-NULL arg `contended'
Changelog since V1
o only abort the compaction if lock is contended or run too
On Fri, 21 Sep 2012 11:46:20 +0100
Mel Gorman mgor...@suse.de wrote:
Compaction's free scanner acquires the zone->lock when checking for PageBuddy
pages and isolating them. It does this even if there are no PageBuddy pages
in the range.
This patch defers acquiring the zone lock for as long as
On Fri, 21 Sep 2012 11:46:22 +0100
Mel Gorman mgor...@suse.de wrote:
When compaction was implemented it was known that scanning could potentially
be excessive. The ideal was that a counter be maintained for each pageblock
but maintaining this information would incur a severe penalty due to a
On Fri, Sep 21, 2012 at 05:28:27PM -0400, Don Slutz wrote:
On 09/21/12 16:49, Eduardo Habkost wrote:
On Fri, Sep 21, 2012 at 04:26:58PM -0400, Don Slutz wrote:
On 09/21/12 10:18, Eduardo Habkost wrote:
On Thu, Sep 20, 2012 at 04:06:27PM -0400, Don Slutz wrote:
From
On 09/21/2012 05:17 AM, Vasilis Liaskovitis wrote:
Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
This patch implements a tail queue to store guest notifications for memory
hot-add and hot-remove requests.
Guest responses for memory hotplug command on a per-dimm basis
On 09/21/2012 05:17 AM, Vasilis Liaskovitis wrote:
Returns total physical memory available to guest in bytes, including
hotplugged memory. Note that the number reported here may be different
from what the guest sees e.g. if the guest has not logically onlined
hotplugged memory.
This
Also known as Paravirtualization CPUIDs.
This is primarily done so that the guest will think it is running
under vmware when hypervisor-vendor=vmware is specified as a
property of a cpu.
This depends on:
http://lists.gnu.org/archive/html/qemu-devel/2012-09/msg01400.html
As far as I know it is
The check using INT_MAX (2147483647) is wrong in this case.
Signed-off-by: Fred Oliveira folive...@cloudswitch.com
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
Also known as Paravirtualization level, or the maximum cpuid function present
in this leaf.
This is just the EAX value for 0x40000000.
QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).
This is based on:
Microsoft Hypervisor CPUID Leaves:
Fix duplicate name (kvmclock = kvm_clock2) also.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 12
1 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 0313cf5..25ca986 100644
--- a/target-i386/cpu.c
+++
These are modeled after x86_cpuid_get_xlevel and x86_cpuid_set_xlevel.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 29 +
1 files changed, 29 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index
These are modeled after x86_cpuid_get_xlevel and x86_cpuid_set_xlevel.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c |8
target-i386/cpu.h |2 ++
2 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index
Also known as Paravirtualization level.
This change is based on:
Microsoft Hypervisor CPUID Leaves:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx
Linux kernel change starts with:
http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html