This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Takuya
---
We did a simple test to see which requests we would get at the same time
in vcpu_enter_guest() and got the following numbers:
At 09/21/2012 07:20 PM, Vasilis Liaskovitis Wrote:
Initialize the 32-bit and 64-bit pci starting offsets from values passed in by
the qemu paravirt interface QEMU_CFG_PCI_WINDOW. Qemu calculates the starting
offsets based on initial memory and hotplug-able dimms.
This patch can't be applied if
At 09/21/2012 07:17 PM, Vasilis Liaskovitis Wrote:
pcimem_start and pcimem64_start are adjusted from srat entries. For this
reason,
paravirt info (NUMA SRAT entries and number of cpus) need to be read before
pci_setup.
Imho, this is an ugly code change since SRAT bios tables and number of
On 09/24/2012 02:24 PM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Why not compare it? I think for_each_set_bit is better and it can
improve for all cases (in your patch, you did not
On Mon, Sep 24, 2012 at 03:24:47PM +0900, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Takuya
---
We did a simple test to see which requests we would get at the same time
in
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to have a guest kernel module that spawns a new thread each
time you write to a /sys/ entry.
Each such thread spins over a
On Fri, Sep 21, 2012 at 02:35:57PM -0700, Andrew Morton wrote:
On Fri, 21 Sep 2012 11:46:20 +0100
Mel Gorman mgor...@suse.de wrote:
Compaction's free scanner acquires the zone->lock when checking for PageBuddy
pages and isolating them. It does this even if there are no PageBuddy pages
in
On Fri, Sep 21, 2012 at 02:36:56PM -0700, Andrew Morton wrote:
On Fri, 21 Sep 2012 11:46:22 +0100
Mel Gorman mgor...@suse.de wrote:
When compaction was implemented it was known that scanning could potentially
be excessive. The ideal was that a counter be maintained for each pageblock
but
On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
On 07/10/2012 01:05 AM, Avi Kivity wrote:
Currently, any time a request bit is set (not too uncommon) we check all of
them.
This patchset optimizes the process slightly by skipping over unset bits
using
for_each_set_bit().
I also notice
On 09/24/2012 08:24 AM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Takuya
---
We did a simple test to see which requests we would get at the same time
in vcpu_enter_guest()
On 09/24/2012 05:48 PM, Avi Kivity wrote:
On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
On 07/10/2012 01:05 AM, Avi Kivity wrote:
Currently, any time a request bit is set (not too uncommon) we check all of
them.
This patchset optimizes the process slightly by skipping over unset bits
using
On Sat, Sep 22, 2012 at 01:46:57PM +, Blue Swirl wrote:
On Fri, Sep 21, 2012 at 11:17 AM, Vasilis Liaskovitis
vasilis.liaskovi...@profitbricks.com wrote:
Example:
-dimm id=dimm0,size=512M,node=0,populated=off
There should not be a need to introduce a new top level option,
instead you
On Mon, Sep 24, 2012 at 02:35:30PM +0800, Wen Congyang wrote:
At 09/21/2012 07:20 PM, Vasilis Liaskovitis Wrote:
Initialize the 32-bit and 64-bit pci starting offsets from values passed in
by
the qemu paravirt interface QEMU_CFG_PCI_WINDOW. Qemu calculates the
starting
offsets based
On 09/24/2012 12:19 PM, Xiao Guangrong wrote:
On 09/24/2012 05:48 PM, Avi Kivity wrote:
On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
On 07/10/2012 01:05 AM, Avi Kivity wrote:
Currently, any time a request bit is set (not too uncommon) we check all
of them.
This patchset optimizes the
On 09/24/2012 09:16 AM, Gleb Natapov wrote:
On Mon, Sep 24, 2012 at 03:24:47PM +0900, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Takuya
---
We did a simple test to see which
On 09/24/2012 08:59 AM, Xiao Guangrong wrote:
On 09/24/2012 02:24 PM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Why not compare it? I think for_each_set_bit is better and it can
On 09/24/2012 06:52 PM, Avi Kivity wrote:
On 09/24/2012 12:19 PM, Xiao Guangrong wrote:
On 09/24/2012 05:48 PM, Avi Kivity wrote:
On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
On 07/10/2012 01:05 AM, Avi Kivity wrote:
Currently, any time a request bit is set (not too uncommon) we check all
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
We can not directly call kvm_release_pfn_clean to release the pfn
since we can meet noslot pfn which is used to cache
On Fri, 2012-09-21 at 17:30 +0530, Raghavendra K T wrote:
+unsigned long rq_nr_running(void)
+{
+	return this_rq()->nr_running;
+}
+EXPORT_SYMBOL(rq_nr_running);
Uhm,.. no, that's a horrible thing to export.
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly, because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
What's the costly thing? The vm-exit, the yield (which should be
On 09/24/2012 05:03 PM, Peter Zijlstra wrote:
On Fri, 2012-09-21 at 17:30 +0530, Raghavendra K T wrote:
+unsigned long rq_nr_running(void)
+{
+	return this_rq()->nr_running;
+}
+EXPORT_SYMBOL(rq_nr_running);
Uhm,.. no, that's a horrible thing to export.
True.. I had the same fear :).
Il 24/09/2012 13:28, Juan Quintela ha scritto:
Hi
Please send in any agenda items you are interested in covering.
URI parsing library for glusterfs: libxml2 vs. in-tree fork of the
same code.
Paolo
On 09/24/2012 07:24 PM, Gleb Natapov wrote:
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
We can not directly call kvm_release_pfn_clean to release the pfn
since we
On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly, because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
What's the costly
On Mon, Sep 24, 2012 at 07:49:37PM +0800, Xiao Guangrong wrote:
On 09/24/2012 07:24 PM, Gleb Natapov wrote:
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
We can
On 09/24/2012 02:12 PM, Dor Laor wrote:
In order to help PLE and pvticketlock converge I thought that a small
test code should be developed to test this in a predictable,
deterministic way.
The idea is to have a guest kernel module that spawns a new thread each
time you write to a /sys/
On 09/24/2012 01:16 PM, Xiao Guangrong wrote:
On 09/24/2012 06:52 PM, Avi Kivity wrote:
On 09/24/2012 12:19 PM, Xiao Guangrong wrote:
On 09/24/2012 05:48 PM, Avi Kivity wrote:
On 09/24/2012 07:55 AM, Xiao Guangrong wrote:
On 07/10/2012 01:05 AM, Avi Kivity wrote:
Currently, any time a
On Fri, Sep 21, 2012 at 11:15:51AM +0200, Alexander Graf wrote:
So how about something like
#define kvmppc_set_reg(id, val, reg) { \
switch (one_reg_size(id)) { \
case 4: val.wval = reg; break; \
case 8: val.dval = reg; break; \
default: BUG(); \
} \
}
case KVM_REG_PPC_DAR:
On 21.09.2012, at 07:33, Paul Mackerras wrote:
The PAPR paravirtualization interface lets guests register three
different types of per-vCPU buffer areas in its memory for communication
with the hypervisor. These are called virtual processor areas (VPAs).
Currently the hypercalls to register
On 21.09.2012, at 07:35, Paul Mackerras wrote:
When a Book3S HV KVM guest is running, we need the host to be in
single-thread mode, that is, all of the cores (or at least all of
the cores where the KVM guest could run) to be running only one
active hardware thread. This is because of the
On 24.09.2012, at 14:16, Paul Mackerras wrote:
On Fri, Sep 21, 2012 at 11:15:51AM +0200, Alexander Graf wrote:
So how about something like
#define kvmppc_set_reg(id, val, reg) { \
switch (one_reg_size(id)) { \
case 4: val.wval = reg; break; \
case 8: val.dval = reg; break; \
On 09/24/2012 08:04 PM, Gleb Natapov wrote:
On Mon, Sep 24, 2012 at 07:49:37PM +0800, Xiao Guangrong wrote:
On 09/24/2012 07:24 PM, Gleb Natapov wrote:
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
On Fri, Sep 21, 2012 at
On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly, because there is no need to iterate over vcpus
On 21.09.2012, at 07:37, Paul Mackerras wrote:
There were a few places where we were traversing the list of runnable
threads in a virtual core, i.e. vc->runnable_threads, without holding
the vcore spinlock. This extends the places where we hold the vcore
spinlock to cover everywhere that we
On 21.09.2012, at 07:35, Paul Mackerras wrote:
This removes the powerpc generic updates of vcpu->cpu in load and
put, and moves them to the various backends.
The reason is that HV KVM does its own sauce with that field
and the generic updates might corrupt it. The field contains the
CPU#
On 21.09.2012, at 07:36, Paul Mackerras wrote:
When making a vcpu non-runnable we incorrectly changed the
thread IDs of all other threads on the core, just remove that
code.
Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Paul Mackerras pau...@samba.org
On 21.09.2012, at 07:39, Paul Mackerras wrote:
In the case where the host kernel is using a 64kB base page size and
the guest uses a 4k HPTE (hashed page table entry) to map an emulated
MMIO device, we were calculating the guest physical address wrongly.
We were calculating a gfn as the
On 09/22/12 09:18, Blue Swirl wrote:
On Sat, Sep 22, 2012 at 12:13 AM, Don Slutz d...@cloudswitch.com wrote:
Also known as Paravirtualization CPUIDs.
This is primarily done so that the guest will think it is running
under vmware when hypervisor-vendor=vmware is specified as a
property of a
On 09/24/2012 06:06 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly,
On Fri, 21 Sep 2012 23:15:40 +0530
Raghavendra K T raghavendra...@linux.vnet.ibm.com wrote:
How about doing cond_resched() instead?
Actually, an actual call to yield() may be better.
That will set scheduler hints to make the scheduler pick
another task for one round, while preserving
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally distributed and lockholder might actually be on a different run
queue but not running.
Load should eventually get distributed equally -- that's what the
On Mon, 24 Sep 2012 14:59:44 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
On 09/24/2012 02:24 PM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Why not compare it?
On Mon, 24 Sep 2012 09:16:12 +0200
Gleb Natapov g...@redhat.com wrote:
Yes, for guests that do not enable steal time KVM_REQ_STEAL_UPDATE
should be never set, but currently it is. The patch (not tested) should
fix this.
Nice!
Takuya
diff --git a/arch/x86/kvm/x86.c
On Mon, 24 Sep 2012 12:18:15 +0200
Avi Kivity a...@redhat.com wrote:
On 09/24/2012 08:24 AM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's work.
Takuya
---
We did a simple test
On 09/24/2012 05:28 AM, Xudong Hao wrote:
Enable KVM FPU fully eager restore, if there is other FPU state which isn't
tracked by the CR0.TS bit.
v4 changes from v3:
- Wrap up some confusing code with a clear function lazy_fpu_allowed()
- Update fpu while updating cr4 too.
v3 changes from v2:
-
On 09/24/2012 07:24 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally distributed and lockholder might actually be on a different run
queue but not running.
Load should eventually
Also known as Paravirtualization CPUIDs.
This is primarily done so that the guest will think it is running
under vmware when hypervisor-vendor=vmware is specified as a
property of a cpu.
This depends on:
http://lists.gnu.org/archive/html/qemu-devel/2012-09/msg01400.html
As far as I know it is
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 12
1 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 0313cf5..25ca986 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -87,10 +87,14 @@ static
Also known as Paravirtualization level or maximum cpuid function present in
this leaf.
This is just the EAX value for 0x40000000.
QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).
This is based on:
Microsoft Hypervisor CPUID Leaves:
These are modeled after x86_cpuid_get_xlevel and x86_cpuid_set_xlevel.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 29 +
1 files changed, 29 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index
This is used to set the cpu object's hypervisor level to the default for
Microsoft's Hypervisor.
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c |9 +
target-i386/cpu.h |2 ++
2 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.c
Also known as Paravirtualization level.
This change is based on:
Microsoft Hypervisor CPUID Leaves:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx
Linux kernel change starts with:
http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also known as Paravirtualization level.
This change is based on:
Microsoft Hypervisor CPUID Leaves:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx
Linux kernel change starts with:
http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also known as Paravirtualization vendor.
This is EBX, ECX, EDX data for 0x40000000.
QEMU knows this is KVM_CPUID_SIGNATURE (0x40000000).
This is based on:
Microsoft Hypervisor CPUID Leaves:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx
Linux kernel
These are modeled after x86_cpuid_set_vendor and x86_cpuid_get_vendor.
Since kvm's vendor is shorter, the test for correct size is removed and zero
padding is added.
Set Microsoft's Vendor now that we can. Value defined in:
On Mon, 24 Sep 2012 12:09:00 +0200
Avi Kivity a...@redhat.com wrote:
while (vcpu->request) {
	xchg(vcpu->request, request);
	for_each_set_bit(request) {
		clear_bit(X);
		..
	}
}
In fact I had something like that in one of the earlier
Also known as Paravirtualization vendor.
This change is based on:
Microsoft Hypervisor CPUID Leaves:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx
Linux kernel change starts with:
http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Also known as Paravirtualization vendor.
This change is based on:
Microsoft Hypervisor CPUID Leaves:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff542428%28v=vs.85%29.aspx
Linux kernel change starts with:
http://fixunix.com/kernel/538707-use-cpuid-communicate-hypervisor.html
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 57 +++-
target-i386/cpu.h | 19 +
2 files changed, 74 insertions(+), 2 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index a929b64..9ab29a7
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.h |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index ebb3498..254ddef 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -812,6 +812,10 @@ typedef struct
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 66 +
1 files changed, 66 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 9ab29a7..8bb20c7 100644
--- a/target-i386/cpu.c
+++
This was taken from:
http://article.gmane.org/gmane.comp.emulators.kvm.devel/22643
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 32
1 files changed, 32 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/kvm.c | 19 +++
1 files changed, 19 insertions(+), 0 deletions(-)
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index f8a5177..ff82034 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -454,6 +454,25 @@
Signed-off-by: Don Slutz d...@cloudswitch.com
---
target-i386/cpu.c | 11 +++
1 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index b77dbfe..1d81f00 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -2001,6 +2001,17 @@ void
Hi,
On Fri, Sep 21, 2012 at 04:03:26PM -0600, Eric Blake wrote:
On 09/21/2012 05:17 AM, Vasilis Liaskovitis wrote:
Guest can respond to ACPI hotplug events e.g. with _EJ or _OST method.
This patch implements a tail queue to store guest notifications for memory
hot-add and hot-remove
On 07.09.2012, at 00:56, Scott Wood wrote:
On 09/06/2012 09:56 AM, Bhushan Bharat-R65777 wrote:
-Original Message-
From: Wood Scott-B07421
Sent: Thursday, September 06, 2012 4:57 AM
To: Bhushan Bharat-R65777
Cc: kvm-...@vger.kernel.org; kvm@vger.kernel.org; ag...@suse.de;
On 09/24/2012 04:14 PM, Takuya Yoshikawa wrote:
On Mon, 24 Sep 2012 12:18:15 +0200
Avi Kivity a...@redhat.com wrote:
On 09/24/2012 08:24 AM, Takuya Yoshikawa wrote:
This is an RFC since I have not done any comparison with the approach
using for_each_set_bit() which can be seen in Avi's
On 09/24/2012 04:32 PM, Takuya Yoshikawa wrote:
On Mon, 24 Sep 2012 12:09:00 +0200
Avi Kivity a...@redhat.com wrote:
while (vcpu->request) {
	xchg(vcpu->request, request);
	for_each_set_bit(request) {
		clear_bit(X);
		..
	}
}
In fact I
Hi Linus,
The following changes since commit c46de2263f42fb4bbde411b9126f471e9343cb22:
Merge branch 'for-linus' of git://git.kernel.dk/linux-block (2012-09-19
11:04:34 -0700)
are available in the git repository at:
git://github.com/awilliam/linux-vfio.git tags/vfio-for-linus
for you to
On 09/21/2012 03:00 PM, Raghavendra K T wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When PLE handler fails to find a better candidate to yield_to, it
goes back and does spin again. This is acceptable when we do not
have overcommit.
But in overcommitted scenarios
On Sat, Sep 22, 2012 at 02:15:28PM +, Blue Swirl wrote:
+
+/* Function to configure memory offsets of hotpluggable dimms */
+
+target_phys_addr_t pc_set_hp_memory_offset(uint64_t size)
+{
+target_phys_addr_t ret;
+
+/* on first call, initialize ram_hp_offset */
+
On Mon, 2012-09-24 at 17:26 +0200, Avi Kivity wrote:
I think this is a no-op these (CFS) days. To get schedule() to do
anything, you need to wake up a task, or let time pass, or block.
Otherwise it will see that nothing has changed and as far as it's
concerned you're still the best task to be
On 21.08.2012, at 15:51, Bharat Bhushan wrote:
This patch defines the interface parameter for KVM_SET_GUEST_DEBUG
ioctl support. Follow up patches will use this for setting up
hardware breakpoints, watchpoints and software breakpoints.
Signed-off-by: Bharat Bhushan
On 09/21/2012 08:24 PM, Raghavendra K T wrote:
On 09/21/2012 06:32 PM, Rik van Riel wrote:
On 09/21/2012 08:00 AM, Raghavendra K T wrote:
From: Raghavendra K T raghavendra...@linux.vnet.ibm.com
When total number of VCPUs of system is less than or equal to physical
CPUs,
PLE exits become
On Fri, Sep 14, 2012 at 05:01:35PM -0600, Alex Williamson wrote:
Same goodness as v4, plus:
- Addressed comments by Blue Swirl (thanks for the review)
(hopefully w/o breaking anything wrt slow bar endianness)
- Fixed a couple checkpatch warnings that snuck in
BTW, this works fine
On 09/24/2012 05:34 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 17:26 +0200, Avi Kivity wrote:
I think this is a no-op these (CFS) days. To get schedule() to do
anything, you need to wake up a task, or let time pass, or block.
Otherwise it will see that nothing has changed and as far as
On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally distributed and lockholder might actually be on a different run
queue but not running.
Load should eventually
On Mon, 2012-09-24 at 17:43 +0200, Avi Kivity wrote:
Wouldn't this correspond to the scheduler interrupt firing and causing a
reschedule? I thought the timer was programmed for exactly the point in
time that CFS considers the right time for a switch. But I'm basing
this on my mental model of
On 09/24/2012 05:52 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 17:43 +0200, Avi Kivity wrote:
Wouldn't this correspond to the scheduler interrupt firing and causing a
reschedule? I thought the timer was programmed for exactly the point in
time that CFS considers the right time for a
Hi Ingo,
Please consider pulling,
- Arnaldo
The following changes since commit 1e6dd8adc78d4a153db253d051fd4ef6c49c9019:
perf: Fix off by one test in perf_reg_value() (2012-09-19 17:08:40 +0200)
are available in the git repository at:
On Mon, 2012-09-24 at 17:51 +0200, Avi Kivity wrote:
On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally distributed and lockholder might actually be on a
From: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Add 'perf kvm stat' support to analyze kvm vmexit/mmio/ioport smartly
Usage:
- kvm stat
run a command and gather performance counter statistics, it is the alias of
perf stat
- trace kvm events:
perf kvm stat record, or, if other
From: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Exporting KVM exit information to userspace to be consumed by perf.
Signed-off-by: Dong Hao haod...@linux.vnet.ibm.com
[ Dong Hao haod...@linux.vnet.ibm.com: rebase it on acme's git tree ]
Signed-off-by: Xiao Guangrong
On Mon, 2012-09-24 at 17:58 +0200, Avi Kivity wrote:
There is the TSC deadline timer mode of newer Intels. Programming the
timer is a simple wrmsr, and it will fire immediately if it already
expired. Unfortunately on AMDs it is not available, and on virtual
hardware it will be slow (~1-2
On 09/24/2012 05:41 PM, Avi Kivity wrote:
case 2)
rq1 : vcpu1->wait(lockA) (spinning)
rq2 : vcpu3 (running) , vcpu2->holding(lockA) [scheduled out]
I agree that checking rq1 length is not proper in this case, and as you
rightly pointed out, we are in trouble here.
On 09/24/2012 06:05 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 17:58 +0200, Avi Kivity wrote:
There is the TSC deadline timer mode of newer Intels. Programming the
timer is a simple wrmsr, and it will fire immediately if it already
expired. Unfortunately on AMDs it is not available, and
On Mon, 2012-09-24 at 18:10 +0200, Avi Kivity wrote:
It's also still a LAPIC write -- disguised as an MSR though :/
It's probably a whole lot faster though.
I've been told it's not, I haven't tried it.
On Mon, 2012-09-24 at 18:06 +0200, Avi Kivity wrote:
We would probably need a -sched_exit() preempt notifier to make this
work. Peter, I know how much you love those, would it be acceptable?
Where exactly do you want this? TASK_DEAD? or another exit?
On 21.08.2012, at 15:52, Bharat Bhushan wrote:
This patch adds the debug stub support on booke/bookehv.
Now QEMU debug stub can use hw breakpoint, watchpoint and
software breakpoint to debug guest.
Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
---
On 09/24/2012 06:03 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 17:51 +0200, Avi Kivity wrote:
On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
However Rik had a genuine concern in the cases where runqueue is not
equally
On 09/24/2012 06:13 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:10 +0200, Avi Kivity wrote:
It's also still a LAPIC write -- disguised as an MSR though :/
It's probably a whole lot faster though.
I've been told it's not, I haven't tried it.
I'll see if I can find a machine with it
On 09/24/2012 06:14 PM, Peter Zijlstra wrote:
On Mon, 2012-09-24 at 18:06 +0200, Avi Kivity wrote:
We would probably need a -sched_exit() preempt notifier to make this
work. Peter, I know how much you love those, would it be acceptable?
Where exactly do you want this? TASK_DEAD? or
On 21.08.2012, at 15:51, Bharat Bhushan wrote:
Like other places, use thread_struct to get vcpu reference.
Please remove the definition of SPRN_SPRG_R/WVCPU as well.
Alex
Signed-off-by: Bharat Bhushan bharat.bhus...@freescale.com
---
arch/powerpc/kernel/asm-offsets.c |2 +-
On Mon, 24 Sep 2012 10:39:38 +0100
Mel Gorman mgor...@suse.de wrote:
On Fri, Sep 21, 2012 at 02:36:56PM -0700, Andrew Morton wrote:
Also, what has to be done to avoid the polling altogether? eg/ie, zap
a pageblock's PB_migrate_skip synchronously, when something was done to
that
-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
Behalf Of Avi Kivity
Sent: Monday, September 24, 2012 10:17 PM
To: Hao, Xudong
Cc: kvm@vger.kernel.org; Zhang, Xiantao
Subject: Re: [PATCH v4] kvm/fpu: Enable fully eager restore kvm FPU
On
This reduces unnecessary interrupts that the host could send to the guest while
the guest is in the process of irq handling.
If one vcpu is handling the irq while another interrupt comes, in
handle_edge_irq() the guest will mask the interrupt via mask_msi_irq(),
which is a very heavy operation that goes
On Fri, Sep 21, 2012 at 11:15:51AM +0200, Alexander Graf wrote:
So how about something like
#define kvmppc_set_reg(id, val, reg) { \
switch (one_reg_size(id)) { \
case 4: val.wval = reg; break; \
case 8: val.dval = reg; break; \
default: BUG(); \
} \
}
case KVM_REG_PPC_DAR:
On 21.09.2012, at 07:33, Paul Mackerras wrote:
The PAPR paravirtualization interface lets guests register three
different types of per-vCPU buffer areas in its memory for communication
with the hypervisor. These are called virtual processor areas (VPAs).
Currently the hypercalls to register
On 21.09.2012, at 07:35, Paul Mackerras wrote:
When a Book3S HV KVM guest is running, we need the host to be in
single-thread mode, that is, all of the cores (or at least all of
the cores where the KVM guest could run) to be running only one
active hardware thread. This is because of the
On 21.09.2012, at 07:36, Paul Mackerras wrote:
When making a vcpu non-runnable we incorrectly changed the
thread IDs of all other threads on the core, just remove that
code.
Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Paul Mackerras pau...@samba.org