Re: [PATCHv2/RFC] kvm/irqchip: Speed up KVM_SET_GSI_ROUTING

2014-02-20 Thread Andrew Theurer
system than synchronize_rcu_expedited. This might give Paolo a hint which of the patches is the right way to go. Hi all, I've asked Andrew Theurer to run network tests on a 10G connection (TCP request/response to check for performance, TCP streaming for host CPU utilization). I am hoping
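The change being benchmarked here comes down to how KVM_SET_GSI_ROUTING waits out the RCU grace period after publishing a new routing table. A minimal sketch of the idea, assuming the table is RCU-published (kvm->irq_routing and synchronize_rcu_expedited() are real kernel names; the surrounding code is a simplification, not the actual patch):

/*
 * KVM_SET_GSI_ROUTING publishes a new table, then must wait out a
 * grace period before freeing the old one. The expedited variant
 * trades IPI traffic for a much shorter wait, which matters when a
 * guest reprograms many GSIs at boot.
 */
rcu_assign_pointer(kvm->irq_routing, new);
synchronize_rcu_expedited();	/* instead of synchronize_rcu() */
kfree(old);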

Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks

2013-06-26 Thread Andrew Theurer
On Wed, 2013-06-26 at 15:52 +0300, Gleb Natapov wrote: On Wed, Jun 26, 2013 at 01:37:45PM +0200, Andrew Jones wrote: On Wed, Jun 26, 2013 at 02:15:26PM +0530, Raghavendra K T wrote: On 06/25/2013 08:20 PM, Andrew Theurer wrote: On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote

Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks

2013-06-25 Thread Andrew Theurer
On Sun, 2013-06-02 at 00:51 +0530, Raghavendra K T wrote: This series replaces the existing paravirtualized spinlock mechanism with a paravirtualized ticketlock mechanism. The series provides implementation for both Xen and KVM. Changes in V9: - Changed spin_threshold to 32k to avoid excess

Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks

2013-06-07 Thread Andrew Theurer
and share the patches I tried. -Andrew Theurer

Re: Preemptable Ticket Spinlock

2013-04-26 Thread Andrew Theurer
dbench in tmpfs, which is a pretty good test for spinlock preempt problems. I had PLE enabled for the test. When you re-base your patches I will try it again. Thanks, -Andrew Theurer

Re: [PATCH V3 RFC 1/2] sched: Bail out of yield_to when source and target runqueue has one task

2012-11-28 Thread Andrew Theurer
the latest throttled yield_to() patch (the one Vinod tested). Signed-off-by: Andrew Theurer haban...@linux.vnet.ibm.com
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ecc5543..61d12ea 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -192,6 +192,7 @@ struct
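The bail-out named in the subject line fits in a few lines of kernel/sched/core.c; a sketch, assuming it matches the form that later landed in mainline (the -ESRCH value and out_irq label come from that later mainline form, not from this thread):

/* inside yield_to(): if both the local and the target runqueue hold a
 * single runnable task, a directed yield cannot accomplish anything,
 * so bail before paying for double_rq_lock(). */
p_rq = task_rq(p);
if (rq->nr_running == 1 && p_rq->nr_running == 1) {
	yielded = -ESRCH;	/* nobody to yield to */
	goto out_irq;		/* skip the expensive lock pair */
}
double_rq_lock(rq, p_rq);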

Re: [PATCH V3 RFC 1/2] sched: Bail out of yield_to when source and target runqueue has one task

2012-11-27 Thread Andrew Theurer
On Tue, 2012-11-27 at 16:00 +0530, Raghavendra K T wrote: On 11/26/2012 07:05 PM, Andrew Jones wrote: On Mon, Nov 26, 2012 at 05:37:54PM +0530, Raghavendra K T wrote: From: Peter Zijlstra pet...@infradead.org In case of undercommitted scenarios, especially in large guests yield_to

Re: [PATCH V2 RFC 0/3] kvm: Improving undercommit,overcommit scenarios

2012-10-30 Thread Andrew Theurer
and suggestions. Link for V1: https://lkml.org/lkml/2012/9/21/168
 kernel/sched/core.c | 25 +++--
 virt/kvm/kvm_main.c | 56 ++--
 2 files changed, 65 insertions(+), 16 deletions(-)
-Andrew Theurer

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-10-19 Thread Andrew Theurer
On Fri, 2012-10-19 at 14:00 +0530, Raghavendra K T wrote: On 10/15/2012 08:04 PM, Andrew Theurer wrote: On Mon, 2012-10-15 at 17:40 +0530, Raghavendra K T wrote: On 10/11/2012 01:06 AM, Andrew Theurer wrote: On Wed, 2012-10-10 at 23:24 +0530, Raghavendra K T wrote: On 10/10/2012 08:29 AM

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-10-15 Thread Andrew Theurer
On Mon, 2012-10-15 at 17:40 +0530, Raghavendra K T wrote: On 10/11/2012 01:06 AM, Andrew Theurer wrote: On Wed, 2012-10-10 at 23:24 +0530, Raghavendra K T wrote: On 10/10/2012 08:29 AM, Andrew Theurer wrote: On Wed, 2012-10-10 at 00:21 +0530, Raghavendra K T wrote: * Avi Kivity

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-10-10 Thread Andrew Theurer
On Wed, 2012-10-10 at 23:13 +0530, Raghavendra K T wrote: On 10/10/2012 07:54 PM, Andrew Theurer wrote: I ran 'perf sched map' on the dbench workload for medium and large VMs, and I thought I would share some of the results. I think it helps to visualize what's going on regarding

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-10-10 Thread Andrew Theurer
On Wed, 2012-10-10 at 23:24 +0530, Raghavendra K T wrote: On 10/10/2012 08:29 AM, Andrew Theurer wrote: On Wed, 2012-10-10 at 00:21 +0530, Raghavendra K T wrote: * Avi Kivity a...@redhat.com [2012-10-04 17:00:28]: On 10/04/2012 03:07 PM, Peter Zijlstra wrote: On Thu, 2012-10-04 at 14:41

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-10-10 Thread Andrew Theurer
I ran 'perf sched map' on the dbench workload for medium and large VMs, and I thought I would share some of the results. I think it helps to visualize what's going on regarding the yielding. These files are png bitmaps, generated from processing output from 'perf sched map' (and perf data

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-10-09 Thread Andrew Theurer
On Wed, 2012-10-10 at 00:21 +0530, Raghavendra K T wrote: * Avi Kivity a...@redhat.com [2012-10-04 17:00:28]: On 10/04/2012 03:07 PM, Peter Zijlstra wrote: On Thu, 2012-10-04 at 14:41 +0200, Avi Kivity wrote: Again the numbers are ridiculously high for arch_local_irq_restore.

Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE handler

2012-10-04 Thread Andrew Theurer
On Thu, 2012-10-04 at 14:41 +0200, Avi Kivity wrote: On 10/04/2012 12:49 PM, Raghavendra K T wrote: On 10/03/2012 10:35 PM, Avi Kivity wrote: On 10/03/2012 02:22 PM, Raghavendra K T wrote: So I think it's worth trying again with ple_window of 2-4. Hi Avi, I ran different

Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

2012-09-28 Thread Andrew Theurer
On Fri, 2012-09-28 at 11:08 +0530, Raghavendra K T wrote: On 09/27/2012 05:33 PM, Avi Kivity wrote: On 09/27/2012 01:23 PM, Raghavendra K T wrote: This gives us a good case for tracking preemption on a per-vm basis. As long as we aren't preempted, we can keep the PLE window high, and

Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

2012-09-27 Thread Andrew Theurer
and then others. Or were you referring to something else? So looking back at the threads/discussions so far, I am trying to summarize them. I feel these, at least, are the few potential candidates to go in: 1) Avoiding double runqueue lock overhead (Andrew Theurer

Re: [RFC][PATCH] Improving directed yield scalability for PLE handler

2012-09-17 Thread Andrew Theurer
On Sun, 2012-09-16 at 11:55 +0300, Avi Kivity wrote: On 09/14/2012 12:30 AM, Andrew Theurer wrote: The concern I have is that even though we have gone through changes to help reduce the candidate vcpus we yield to, we still have a very poor idea of which vcpu really needs to run

Re: [RFC][PATCH] Improving directed yield scalability for PLE handler

2012-09-13 Thread Andrew Theurer
On Thu, 2012-09-13 at 17:18 +0530, Raghavendra K T wrote: * Andrew Theurer haban...@linux.vnet.ibm.com [2012-09-11 13:27:41]: On Tue, 2012-09-11 at 11:38 +0530, Raghavendra K T wrote: On 09/11/2012 01:42 AM, Andrew Theurer wrote: On Mon, 2012-09-10 at 19:12 +0200, Peter Zijlstra wrote

Re: [RFC][PATCH] Improving directed yield scalability for PLE handler

2012-09-11 Thread Andrew Theurer
On Tue, 2012-09-11 at 11:38 +0530, Raghavendra K T wrote: On 09/11/2012 01:42 AM, Andrew Theurer wrote: On Mon, 2012-09-10 at 19:12 +0200, Peter Zijlstra wrote: On Mon, 2012-09-10 at 22:26 +0530, Srikar Dronamraju wrote: +static bool __yield_to_candidate(struct task_struct *curr, struct

Re: [RFC][PATCH] Improving directed yield scalability for PLE handler

2012-09-10 Thread Andrew Theurer
On Sat, 2012-09-08 at 14:13 +0530, Srikar Dronamraju wrote: Signed-off-by: Andrew Theurer haban...@linux.vnet.ibm.com
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fbf1fd0..c767915 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4844,6 +4844,9

Re: [RFC][PATCH] Improving directed yield scalability for PLE handler

2012-09-10 Thread Andrew Theurer
On Mon, 2012-09-10 at 19:12 +0200, Peter Zijlstra wrote: On Mon, 2012-09-10 at 22:26 +0530, Srikar Dronamraju wrote:
+static bool __yield_to_candidate(struct task_struct *curr, struct task_struct *p)
+{
+	if (!curr->sched_class->yield_to_task)
+		return false;
+
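The helper is cut off by the archive; a plausible reconstruction of the cheap pre-checks being debated (the p_rq parameter and the task_running() test are assumptions based on the surrounding thread, not the verbatim patch):

static bool __yield_to_candidate(struct task_struct *curr,
				 struct task_struct *p, struct rq *p_rq)
{
	if (!curr->sched_class->yield_to_task)
		return false;		/* class has no directed yield */

	if (curr->sched_class != p->sched_class)
		return false;		/* cross-class yield is meaningless */

	if (task_running(p_rq, p) || p->state)
		return false;		/* target already running or asleep */

	return true;
}

The point of the helper is ordering: run every check that needs no lock (or only the target's lock) before yield_to() commits to taking both runqueue locks.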

[RFC][PATCH] Improving directed yield scalability for PLE handler

2012-09-07 Thread Andrew Theurer
is: given a runqueue, what's the best way to check if that corresponding phys cpu is not in guest mode? Here are the changes so far (schedstat changes not included here): Signed-off-by: Andrew Theurer haban...@linux.vnet.ibm.com
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index

Re: [RFC][PATCH] Improving directed yield scalability for PLE handler

2012-09-07 Thread Andrew Theurer
On Fri, 2012-09-07 at 23:36 +0530, Raghavendra K T wrote: CCing PeterZ also. On 09/07/2012 06:41 PM, Andrew Theurer wrote: I have noticed recently that PLE/yield_to() is still not that scalable for really large guests, sometimes even with no CPU over-commit. I have a small change

pagemapscan-numa: find out where your multi-node VM's memory is (and a question)

2012-07-13 Thread Andrew Theurer
related tests. Thanks, -Andrew Theurer
/* pagemapscan-numa.c v0.01
 *
 * Copyright (c) 2012 IBM
 *
 * Author: Andrew Theurer
 *
 * This software is licensed to you under the GNU General Public License,
 * version 2 (GPLv2). There is NO WARRANTY for this software, express or
 * implied, including
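The tool itself is not preserved beyond its header. For the curious, the kernel can answer the same question ("which node is this page on?") through move_pages(2) in query mode; this standalone example illustrates that API, and is not the pagemapscan-numa implementation (which, per its name, walks /proc/<pid>/pagemap instead):

/* Query the NUMA node backing one page: passing nodes == NULL makes
 * move_pages() report placement instead of migrating anything.
 * Build with: gcc -o nodequery nodequery.c -lnuma */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	void *pages[1];
	int status[1];

	pages[0] = malloc(4096);
	*(volatile char *)pages[0] = 1;	/* touch so the page exists */

	if (move_pages(0 /* self */, 1, pages, NULL, status, 0) != 0) {
		perror("move_pages");
		return 1;
	}
	printf("page at %p is on node %d\n", pages[0], status[0]);
	return 0;
}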

Re: [PATCH RFC 0/2] kvm: Improving directed yield in PLE handler

2012-07-10 Thread Andrew Theurer
On Tue, 2012-07-10 at 17:24 +0530, Raghavendra K T wrote: On 07/10/2012 03:17 AM, Andrew Theurer wrote: On Mon, 2012-07-09 at 11:50 +0530, Raghavendra K T wrote: Currently the Pause Loop Exit (PLE) handler does a directed yield to a random VCPU on PL exit. Though we already have filtering

Re: [PATCH RFC 0/2] kvm: Improving directed yield in PLE handler

2012-07-09 Thread Andrew Theurer
spent in host in the double runqueue lock for yield_to(), so that's why I still gravitate toward that issue. -Andrew Theurer
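The double runqueue lock he keeps coming back to sits in the stock yield_to() path; trimmed to its skeleton (a paraphrase of mainline code of that era, not a quote from the thread):

int yield_to(struct task_struct *p, bool preempt)
{
	struct rq *rq, *p_rq;
	unsigned long flags;
	int yielded = 0;

	local_irq_save(flags);
	rq = this_rq();
	p_rq = task_rq(p);
	double_rq_lock(rq, p_rq);	/* both rq locks, on every PLE yield */
	/* ... eligibility checks, sched_class->yield_to_task() ... */
	double_rq_unlock(rq, p_rq);
	local_irq_restore(flags);
	return yielded;
}

With dozens of vcpus PLE-exiting at once, the handlers serialize on this lock pair, which is what the later bail-out and candidate-check patches try to avoid.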

Re: [PATCH] add PLE stats to kvmstat

2012-07-06 Thread Andrew Theurer
On Fri, 2012-07-06 at 15:42 +0800, Xiao Guangrong wrote: On 07/06/2012 05:50 AM, Andrew Theurer wrote: I, and I expect others, have a keen interest in knowing how often we exit for PLE, and also how often that includes a yielding to another vcpu. The following adds two more counters

Re: [PATCH] add PLE stats to kvmstat

2012-07-06 Thread Andrew Theurer
On Sat, 2012-07-07 at 01:40 +0800, Xiao Guangrong wrote: On 07/06/2012 09:22 PM, Andrew Theurer wrote: On Fri, 2012-07-06 at 15:42 +0800, Xiao Guangrong wrote: On 07/06/2012 05:50 AM, Andrew Theurer wrote: I, and I expect others, have a keen interest in knowing how often we exit for PLE

Re: [PATCH] kvm: handle last_boosted_vcpu = 0 case

2012-07-05 Thread Andrew Theurer
On Mon, 2012-07-02 at 10:49 -0400, Rik van Riel wrote: On 06/28/2012 06:55 PM, Vinod, Chegu wrote: Hello, I am just catching up on this email thread... Perhaps one of you may be able to help answer this query.. preferably along with some data. [BTW, I do understand the basic intent

[PATCH] add PLE stats to kvmstat

2012-07-05 Thread Andrew Theurer
going on. -Andrew Theurer Signed-off-by: Andrew Theurer haban...@linux.vnet.ibm.com
 arch/x86/include/asm/kvm_host.h | 2 ++
 arch/x86/kvm/svm.c              | 1 +
 arch/x86/kvm/vmx.c              | 1 +
 arch/x86/kvm/x86.c              | 2 ++
 virt/kvm/kvm_main.c             | 1
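Only the diffstat survives in the archive. A sketch of what the two counters amount to (the field names pause_exits and ple_yields are assumptions; handle_pause() and kvm_vcpu_on_spin() are the real functions):

/* arch/x86/include/asm/kvm_host.h -- two new per-vcpu counters */
struct kvm_vcpu_stat {
	/* ... existing counters ... */
	u32 pause_exits;	/* PAUSE-loop vmexits taken */
	u32 ple_yields;		/* exits that yielded to another vcpu */
};

/* arch/x86/kvm/vmx.c -- bump the counter in the PLE exit handler */
static int handle_pause(struct kvm_vcpu *vcpu)
{
	++vcpu->stat.pause_exits;
	kvm_vcpu_on_spin(vcpu);	/* the yield counter would be bumped here */
	return 1;
}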

Re: [RFC:kvm] export host NUMA info to guest make emulated device NUMA attr

2012-05-23 Thread Andrew Theurer
, the vcpus could be scheduled on different nodes. Someone is working on an in-kernel solution. Andrew Theurer has a working user-space NUMA-aware VM balancer; it requires libvirt and cgroups (which are default for RHEL6 systems). Interesting, and I found that sched/numa: Introduce sys_numa_{t,m}bind

gettimeofday() vsyscall for kvm-clock?

2012-05-21 Thread Andrew Theurer
on a 16 thread 2S Nehalem-EP host, looped gettimeofday() calls on all vCPUs) tsc: .0645 usec per call; kvm-clock: .4222 usec per call (6.54x) -Andrew Theurer
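A microbenchmark of the sort that produces numbers like these is a few lines of userspace C (the original test program is not shown in the thread; this is a stand-in, run once with clocksource=tsc and once with kvm-clock):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
	const long iters = 10 * 1000 * 1000;
	struct timespec t0, t1;
	struct timeval tv;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < iters; i++)
		gettimeofday(&tv, NULL);	/* the call under test */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9
		  + (t1.tv_nsec - t0.tv_nsec);
	printf("%.4f usec per gettimeofday() call\n", ns / iters / 1000.0);
	return 0;
}

The 6.5x gap is what motivates the subject line: without a vsyscall/vDSO path for kvm-clock, every gettimeofday() takes a real syscall.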

Re: gettimeofday() vsyscall for kvm-clock?

2012-05-21 Thread Andrew Theurer
On 05/21/2012 03:36 PM, Marcelo Tosatti wrote: On Mon, May 21, 2012 at 03:26:54PM -0500, Andrew Theurer wrote: Wondering if a user-space gettimofday() for kvm-clock has been considered before. I am seeing a pretty large difference in performance between tsc and kvm-clock. I have to assume

Re: perf stat to collect the performance statistics of KVM process

2012-05-14 Thread Andrew Theurer
,) 2> perftest &
[1] 15086
[root@dell06 ~]# kill 15086
[root@dell06 ~]#
[1]+  Terminated   ( perf stat -p 7473 -x , ) 2> perftest
[root@dell06 ~]# cat perftest
[root@dell06 ~]#
Any clue? Can you please try kill -s INT pid. Best Regards, Hailong -Andrew Theurer

Re: performance of virtual functions compared to virtio

2011-04-25 Thread Andrew Theurer
On Mon, 2011-04-25 at 13:49 -0600, David Ahern wrote: On 04/25/11 13:29, Alex Williamson wrote: So we're effectively getting host-host latency/throughput for the VF, it's just that in the 82576 implementation of SR-IOV, the VF takes a latency hit that puts it pretty close to virtio.

Re: Network performance with small packets

2011-03-08 Thread Andrew Theurer
On Tue, 2011-03-08 at 13:57 -0800, Shirley Ma wrote: On Wed, 2011-02-09 at 11:07 +1030, Rusty Russell wrote: I've finally read this thread... I think we need to get more serious with our stats gathering to diagnose these kind of performance issues. This is a start; it should tell us what

Re: [PATCH 0/3] [RFC] Implement multiqueue (RX TX) virtio-net

2011-03-03 Thread Andrew Theurer
On Mon, 2011-02-28 at 12:04 +0530, Krishna Kumar wrote: This patch series is a continuation of an earlier one that implemented guest MQ TX functionality. This new patchset implements both RX and TX MQ. Qemu changes are not being included at this time solely to aid in easier review.

Re: [PATCH 4/4] NUMA: realize NUMA memory pinning

2010-08-31 Thread Andrew Theurer
to QEMU that lets us work with existing tooling instead of inventing new interfaces. Regards, Anthony Liguori Regards, Andre. -Andrew Theurer

Re: [PATCH 4/4] NUMA: realize NUMA memory pinning

2010-08-31 Thread Andrew Theurer
On Tue, 2010-08-31 at 17:03 -0500, Anthony Liguori wrote: On 08/31/2010 03:54 PM, Andrew Theurer wrote: On Mon, 2010-08-23 at 16:27 -0500, Anthony Liguori wrote: On 08/23/2010 04:16 PM, Andre Przywara wrote: Anthony Liguori wrote: On 08/23/2010 01:59 PM, Marcelo

windows workload: many ept_violation and mmio exits

2009-12-03 Thread Andrew Theurer
I am running a Windows workload which has 26 Windows VMs running many instances of a J2EE workload. There are 13 pairs of an application server VM and database server VM. There seem to be quite a few vm_exits, and it looks like over a third of them are mmio_exit: efer_relo 0 exits

Re: kernel bug in kvm_intel

2009-11-30 Thread Andrew Theurer
On Sun, 2009-11-29 at 16:46 +0200, Avi Kivity wrote: On 11/26/2009 03:35 AM, Andrew Theurer wrote: I just tried testing tip of kvm.git, but unfortunately I think I might be hitting a different problem, where processes run 100% in kernel mode. In my case, cpus 9 and 13 were stuck, running

Re: kernel bug in kvm_intel

2009-11-25 Thread Andrew Theurer
Tejun Heo wrote: Hello, 11/01/2009 08:31 PM, Avi Kivity wrote: Here is the code in question:
3ae7: 75 05      jne    3aee <vmx_vcpu_run+0x26a>
3ae9: 0f 01 c2   vmlaunch
3aec: eb 03      jmp

Re: kernel bug in kvm_intel

2009-10-31 Thread Andrew Theurer
Avi Kivity wrote: On 10/30/2009 08:07 PM, Andrew Theurer wrote: I have finally bisected and isolated this to the following commit: ada3fa15057205b7d3f727bba5cd26b5912e350f http://git.kernel.org/?p=virt/kvm/kvm.git;a=commit;h=ada3fa15057205b7d3f727bba5cd26b5912e350f Merge branch

Re: kernel bug in kvm_intel

2009-10-30 Thread Andrew Theurer
On Thu, 2009-10-15 at 15:18 -0500, Andrew Theurer wrote: On Thu, 2009-10-15 at 02:10 +0900, Avi Kivity wrote: On 10/13/2009 11:04 PM, Andrew Theurer wrote: Look at the address where vmx_vcpu_run starts, add 0x26d, and show the surrounding code. Thinking about it, it probably _is_

Re: [PATCH 1/3] introduce VMSTATE_U64

2009-10-27 Thread Andrew Theurer
On Tue, Oct 20, 2009 at 08:40:26AM +0900, Avi Kivity wrote: On 10/17/2009 04:27 AM, Glauber Costa wrote: This is a patch actually written by Juan, which, according to him, he plans on posting to qemu.git. The problem is that linux defines u64 in a way that is type-incompatible with uint64_t.
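The incompatibility is easy to reproduce outside QEMU (illustration only; VMSTATE_U64 itself is the QEMU-side wrapper this thread is adding):

#include <stdint.h>

typedef unsigned long long u64;	/* kernel-style definition */

static void load(uint64_t *p) { (void)p; }

int main(void)
{
	u64 field = 0;
	load(&field);	/* warns on LP64: uint64_t is 'unsigned long'
			   there, so the pointer types don't match even
			   though both types are 64 bits wide */
	return 0;
}

Because pointers to the two types don't mix, a struct field declared u64 can't be fed to the generic VMSTATE_UINT64 machinery, hence the dedicated macro.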

Re: kernel bug in kvm_intel

2009-10-15 Thread Andrew Theurer
On Thu, 2009-10-15 at 02:10 +0900, Avi Kivity wrote: On 10/13/2009 11:04 PM, Andrew Theurer wrote: Look at the address where vmx_vcpu_run starts, add 0x26d, and show the surrounding code. Thinking about it, it probably _is_ what you showed, due to module page alignment. But please

Re: kernel bug in kvm_intel

2009-10-13 Thread Andrew Theurer
On Tue, 2009-10-13 at 08:50 +0200, Avi Kivity wrote: On 10/12/2009 08:42 PM, Andrew Theurer wrote: On Sun, 2009-10-11 at 07:19 +0200, Avi Kivity wrote: On 10/09/2009 10:04 PM, Andrew Theurer wrote: This is on latest master branch on kvm.git and qemu-kvm.git, running 12

Re: kernel bug in kvm_intel

2009-10-12 Thread Andrew Theurer
On Sun, 2009-10-11 at 07:19 +0200, Avi Kivity wrote: On 10/09/2009 10:04 PM, Andrew Theurer wrote: This is on latest master branch on kvm.git and qemu-kvm.git, running 12 Windows Server2008 VMs, and using oprofile. I ran again without oprofile and did not get the BUG. I am wondering

kernel bug in kvm_intel

2009-10-09 Thread Andrew Theurer
This is on latest master branch on kvm.git and qemu-kvm.git, running 12 Windows Server2008 VMs, and using oprofile. I ran again without oprofile and did not get the BUG. I am wondering if anyone else is seeing this. Thanks, -Andrew Oct 9 11:55:13 virtvictory-eth0 kernel: BUG: unable to

Re: kvm scaling question

2009-09-15 Thread Andrew Theurer
On Mon, 2009-09-14 at 17:19 -0600, Bruce Rogers wrote: On 9/11/2009 at 3:53 PM, Marcelo Tosatti mtosa...@redhat.com wrote: On Fri, Sep 11, 2009 at 09:36:10AM -0600, Bruce Rogers wrote: I am wondering if anyone has investigated how well kvm scales when supporting many guests, or many vcpus

Re: [PATCH] KVM: Use thread debug register storage instead of kvm specific data

2009-09-04 Thread Andrew Theurer
On Tue, 2009-09-01 at 21:23 +0300, Avi Kivity wrote: On 09/01/2009 09:12 PM, Andrew Theurer wrote: Here's a run from branch debugreg with thread debugreg storage + conditionally reload dr6:
user   nice   system   irq    softirq   guest   idle   iowait
5.79   0.00   9.28     0.08   1.00      20.81

Re: [PATCH] KVM: Use thread debug register storage instead of kvm specific data

2009-09-04 Thread Andrew Theurer
Brian Jackson wrote: On Friday 04 September 2009 09:48:17 am Andrew Theurer wrote: snip Still not idle=poll, it may shave off 0.2%. Won't this affect SMT in a negative way? (OK, I am not running SMT now, but eventually we will be) A long time ago, we tested P4's with HT, and a polling idle

Re: [PATCH] KVM: Use thread debug register storage instead of kvm specific data

2009-09-01 Thread Andrew Theurer
On Tue, 2009-09-01 at 12:47 +0300, Avi Kivity wrote: On 09/01/2009 12:44 PM, Avi Kivity wrote: Instead of saving the debug registers from the processor to a kvm data structure, rely on the debug registers stored in the thread structure. This allows us not to save dr6 and dr7. Reduces
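The gist of the patch, sketched (not the verbatim diff; TIF_DEBUG, set_debugreg() and the thread fields are real kernel names of that era): on return to the host, only threads that actually use breakpoints need their debug registers reinstalled, and the master copy already lives in the thread struct:

if (unlikely(test_thread_flag(TIF_DEBUG))) {
	set_debugreg(current->thread.debugreg0, 0);
	set_debugreg(current->thread.debugreg1, 1);
	set_debugreg(current->thread.debugreg2, 2);
	set_debugreg(current->thread.debugreg3, 3);
	set_debugreg(current->thread.debugreg6, 6);
	set_debugreg(current->thread.debugreg7, 7);
}

So kvm no longer saves dr6/dr7 around every vcpu run, which is where the measured savings come from.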

Re: [PATCH] don't call adjust_vmx_controls() second time

2009-08-31 Thread Andrew Theurer
Avi Kivity wrote: On 08/27/2009 11:42 PM, Andrew Theurer wrote: On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote: On 08/27/2009 06:41 PM, Gleb Natapov wrote: Don't call adjust_vmx_controls() two times for the same control. It restores options that were dropped earlier

Re: [PATCH] don't call adjust_vmx_controls() second time

2009-08-27 Thread Andrew Theurer
On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote: On 08/27/2009 06:41 PM, Gleb Natapov wrote: Don't call adjust_vmx_controls() two times for the same control. It restores options that were dropped earlier. Applied, thanks. Andrew, if you rerun your benchmark atop kvm.git 'next'

Performance data when running Windows VMs

2009-08-26 Thread Andrew Theurer
I recently gathered some performance data when running Windows Server 2008 VMs, and I wanted to share it here. There are 12 Windows Server 2008 64-bit VMs (1 vcpu, 2 GB) running which handle the concurrent execution of 6 J2EE type benchmarks. Each benchmark needs an App VM and a Database VM. The

Re: Performance data when running Windows VMs

2009-08-26 Thread Andrew Theurer
On Wed, 2009-08-26 at 18:44 +0300, Avi Kivity wrote: On 08/26/2009 05:57 PM, Andrew Theurer wrote: I recently gathered some performance data when running Windows Server 2008 VMs, and I wanted to share it here. There are 12 Windows Server2008 64-bit VMs (1 vcpu, 2 GB) running which handle

Re: Performance data when running Windows VMs

2009-08-26 Thread Andrew Theurer
On Wed, 2009-08-26 at 19:26 +0300, Avi Kivity wrote: On 08/26/2009 07:14 PM, Andrew Theurer wrote: On Wed, 2009-08-26 at 18:44 +0300, Avi Kivity wrote: On 08/26/2009 05:57 PM, Andrew Theurer wrote: I recently gathered some performance data when running Windows Server 2008

Re: Performance data when running Windows VMs

2009-08-26 Thread Andrew Theurer
On Wed, 2009-08-26 at 11:27 -0500, Brian Jackson wrote: On Wednesday 26 August 2009 11:14:57 am Andrew Theurer wrote: snip I/O on the host was not what I would call very high: outbound network averaged 163 Mbit/s, inbound was 8 Mbit/s, while disk read ops were 243/sec and write

Re: Windows Server 2008 VM performance

2009-06-03 Thread Andrew Theurer
Avi Kivity wrote: Andrew Theurer wrote: Is there a virtio_block driver to test? There is, but it isn't available yet. OK. Can I assume a better virtio_net driver is in the works as well? Can we find the root cause of the exits (is there a way to get stack dump or something that can

Windows Server 2008 VM performance

2009-06-02 Thread Andrew Theurer
I've been looking at how KVM handles windows guests, and I am a little concerned with the CPU overhead. My test case is as follows: I am running 4 instances of a J2EE benchmark. Each instance needs one application server and one DB server. 8 VMs in total are used. I have the same App and

Re: KVM performance vs. Xen

2009-04-30 Thread Andrew Theurer
Avi Kivity wrote: Andrew Theurer wrote: I wanted to share some performance data for KVM and Xen. I thought it would be interesting to share some performance results, especially compared to Xen, using a more complex situation like heterogeneous server consolidation. The Workload: The workload

Re: KVM performance vs. Xen

2009-04-30 Thread Andrew Theurer
Avi Kivity wrote: Andrew Theurer wrote: Avi Kivity wrote: What's the typical I/O load (disk and network bandwidth) while the tests are running? This is average throughput: network: Tx: 79 MB/sec, Rx: 5 MB/sec MB as in Byte or Mb as in bit? Byte. There are 4 x 1 Gb adapters, each

Re: KVM performance vs. Xen

2009-04-30 Thread Andrew Theurer
Avi Kivity wrote: Anthony Liguori wrote: Avi Kivity wrote: 1) I'm seeing about 2.3% in scheduler functions [that I recognize]. Does that seems a bit excessive? Yes, it is. If there is a lot of I/O, this might be due to the thread pool used for I/O. This is why I wrote the linux-aio

Re: KVM performance vs. Xen

2009-04-30 Thread Andrew Theurer
Here are the SMT off results. This workload is designed to not over-saturate the CPU, so you have to pick a number of server sets to ensure that. With SMT on, 4 sets was enough for KVM, but 5 was too much (start seeing response time errors). For SMT off, I tried to size the load as high as

KVM performance vs. Xen

2009-04-29 Thread Andrew Theurer
kvm_apic_has_interrupt
880421  0.1399  librt-2.9.so   /lib64/librt-2.9.so
880306  0.1399  vmlinux-2.6.27.19-5-default   nf_iterate
-Andrew Theurer

Re: KVM performance vs. Xen

2009-04-29 Thread Andrew Theurer
Nakajima, Jun wrote: On 4/29/2009 7:41:50 AM, Andrew Theurer wrote: I wanted to share some performance data for KVM and Xen. I thought it would be interesting to share some performance results especially compared to Xen, using a more complex situation like heterogeneous server consolidation

boot problems with if=virtio

2009-04-27 Thread Andrew Theurer
I know there have been a couple other threads here about booting with if=virtio, but I think this might be a different problem, not sure: I am using kvm.git (41b76d8d0487c26d6d4d3fe53c1ff59b3236f096) and qemu-kvm.git (8f7a30dbc40a1d4c09275566f9ed9647ed1ee50f) and linux 2.6.20-rc3 It appears to

Re: patch for virtual machine oriented scheduling(1)

2009-04-23 Thread Andrew Theurer
alex wrote: the following patches provide an extra control (besides the control of the Linux scheduler) over the execution of vcpu threads. In this patch, Xen's credit scheduler (http://wiki.xensource.com/xenwiki/CreditScheduler) is used. Users can use the cat and echo commands to view and control a guest

Re: EPT support breakage on: KVM: VMX: Zero ept module parameter if ept is not present

2009-04-01 Thread Andrew Theurer
Sheng Yang wrote: Oops... Thanks very much for reporting! I can't believe we weren't aware of that... Could you please try the attached patch? Thanks! Tested and works great. Thanks! -Andrew
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index aba41ae..8d6465b 100644
---

EPT support breakage on: KVM: VMX: Zero ept module parameter if ept is not present

2009-03-31 Thread Andrew Theurer
I cannot get EPT support to work on commit: 21f65ab2c582594a69dcb1484afa9f88b3414b4f KVM: VMX: Zero ept module parameter if ept is not present. I see tons of pf_guest from kvm_stat, whereas the previous commit has none. I am using the ept=1 module option for kvm-intel. This is on Nehalem

Re: [PATCH] KVM: Defer remote tlb flushes on invlpg (v3)

2009-03-19 Thread Andrew Theurer
Avi Kivity wrote: KVM currently flushes the tlbs on all cpus when emulating invlpg. This is because at the time of invlpg we lose track of the page, and leaving stale tlb entries could cause the guest to access the page when it is later freed (say after being swapped out). However we have a
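The mechanism in miniature (the helper names here are illustrative, not the patch's; kvm_flush_remote_tlbs() is the real function): emulating invlpg merely records that remote TLBs are stale, and the actual IPI flush is issued once, at the point where a stale entry could first do harm:

static void invlpg_note_stale(struct kvm *kvm)
{
	kvm->remote_tlbs_dirty = true;		/* cheap: no IPIs here */
}

static void mmu_sync_before_page_free(struct kvm *kvm)
{
	if (kvm->remote_tlbs_dirty) {
		kvm->remote_tlbs_dirty = false;
		kvm_flush_remote_tlbs(kvm);	/* one batched flush */
	}
}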