On 7/1/2013 10:49 PM, Rusty Russell wrote:
Chegu Vinod chegu_vi...@hp.com writes:
On 6/30/2013 11:22 PM, Rusty Russell wrote:
Chegu Vinod chegu_vi...@hp.com writes:
Hello,
Lots (~700+) of the following messages are showing up in the dmesg of a
3.10-rc1 based kernel (Host OS is running on a large socket count box
with HT-on).
[ 82.270682] PERCPU: allocation failed, size=42 align=16, alloc from
reserved chunk failed
[ 82.272633] kvm_intel: Could
-by: Chegu Vinod chegu_vi...@hp.com
I was able to verify your changes on a 2-socket Sandybridge-EP platform
and observed a ~7-8% improvement in netperf's TCP_RR
performance. The guest size was small (16vcpu/32GB).
Hopefully these changes also have an indirect benefit of avoiding soft
http://article.gmane.org/gmane.comp.emulators.kvm.devel/99713 )
Signed-off-by: Chegu Vinod chegu_vi...@hp.com
---
arch/x86/include/asm/kvm_host.h |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4979778
On 4/22/2013 1:50 PM, Jiannan Ouyang wrote:
On Mon, Apr 22, 2013 at 4:44 PM, Peter Zijlstra pet...@infradead.org wrote:
On Mon, 2013-04-22 at 16:32 -0400, Rik van Riel wrote:
IIRC one of the reasons was that the performance improvement wasn't
as obvious. Rescheduling VCPUs takes a fair amount
(pn);
+ if (current->state == TASK_RUNNING)
+ vcpu->preempted = true;
kvm_arch_vcpu_put(vcpu);
}
.
Reviewed-by: Chegu Vinod chegu_vi...@hp.com
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
-preempted))
+ continue;
if (vcpu == me)
continue;
if (waitqueue_active(vcpu->wq))
.
Reviewed-by: Chegu Vinod chegu_vi...@hp.com
Zhang, Yang Z yang.z.zhang at intel.com writes:
Marcelo Tosatti wrote on 2013-01-24:
On Wed, Jan 23, 2013 at 10:47:23PM +0800, Yang Zhang wrote:
From: Yang Zhang yang.z.zhang at Intel.com
APIC virtualization is a new feature which can eliminate most of the VM exits
when a vcpu handles a
Hello,
'am running into an issue with the latest bits. [ Pl. see below. The
vhost thread seems to be getting
stuck while trying to memcopy... perhaps a bad address? ] Wondering if
this is a known issue or
some recent regression?
'am using the latest qemu (from qemu.git) and the latest
On 1/9/2013 8:35 PM, Jason Wang wrote:
On 01/10/2013 04:25 AM, Chegu Vinod wrote:
(-)
.
Tested-by: Chegu Vinod chegu_vi...@hp.com
On 11/28/2012 5:09 PM, Chegu Vinod wrote:
On 11/27/2012 6:23 AM, Chegu Vinod wrote:
On 11/27/2012 2:30 AM, Raghavendra K T wrote:
On 11/26/2012 07:05 PM, Andrew Jones wrote:
On Mon, Nov 26, 2012 at 05:37:54PM +0530, Raghavendra K T wrote:
From: Peter Zijlstra pet...@infradead.org
In case of undercommitted scenarios, especially in large guests,
yield_to overhead is
);
}
apic->base_address = apic->vcpu->arch.apic_base
--
Gleb.
.
Reviewed-by: Chegu Vinod chegu_vi...@hp.com
Tested-by: Chegu Vinod chegu_vi...@hp.com
On 10/14/2012 2:08 AM, Gleb Natapov wrote:
On Sat, Oct 13, 2012 at 10:32:13PM -0400, Sasha Levin wrote:
On 10/13/2012 06:29 PM, Chegu Vinod wrote:
Hello,
Wanted to get a clarification about KVM_MAX_VCPUS (currently set to 254)
in the kvm_host.h file. The kvm_vcpu *vcpus array is sized based on KVM_MAX_VCPUS
(i.e. a max of 254 elements in the array).
An 8-bit APIC ID should allow for 256 IDs. Reserving one for broadcast should
leave 255
On 9/21/2012 4:59 AM, Raghavendra K T wrote:
In some special scenarios like #vcpu <= #pcpu, PLE handler may
prove very costly,
Yes.
because there is no need to iterate over vcpus
and do unsuccessful yield_to burning CPU.
An idea to solve this is:
1) As Avi had proposed we can modify
59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
Signed-off-by: Chegu Vinod chegu_vi...@hp.com, Jim Hull jim.h...@hp.com,
Craig Hada craig.h...@hp.com
---
cpus.c |3 ++-
hw/pc.c |3 ++-
sysemu.h |3
4 size: 65536 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
Signed-off-by: Chegu Vinod chegu_vi...@hp.com, Jim Hull jim.h...@hp.com,
Craig Hada craig.h...@hp.com
Tested
Hello,
Wanted to share some preliminary data from live migration experiments on a
setup
that is perhaps one of the larger ones.
We used Juan's huge_memory patches (without the separate migration thread)
and
measured the total migration time and the time taken for stage 3 (downtime).
Note:
cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
Signed-off-by: Chegu Vinod chegu_vi...@hp.com, Jim Hull jim.h...@hp.com,
Craig Hada craig.h...@hp.com
---
cpus.c |3 ++-
hw/pc.c |4 +++-
sysemu.h |3 ++-
vl.c | 48
On 6/18/2012 1:29 PM, Eduardo Habkost wrote:
On Sun, Jun 17, 2012 at 01:12:31PM -0700, Chegu Vinod wrote:
The -numa option to qemu is used to create [fake] numa nodes
and expose them to the guest OS instance.
There are a couple of issues with the -numa option:
a) Max VCPU's that can
On 6/18/2012 3:11 PM, Eric Blake wrote:
On 06/18/2012 04:05 PM, Andreas Färber wrote:
Am 17.06.2012 22:12, schrieb Chegu Vinod:
diff --git a/vl.c b/vl.c
index 204d85b..1906412 100644
--- a/vl.c
+++ b/vl.c
@@ -28,6 +28,7 @@
#include <errno.h>
#include <sys/time.h>
#include <zlib.h>
node 5 cpus: 50 51 52 53 54 55 56 57 58 59
node 5 size: 65536 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69
node 6 size: 65536 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79
node 7 size: 65536 MB
Signed-off-by: Chegu Vinod chegu_vi...@hp.com, Jim Hull jim.h...@hp.com,
Craig Hada craig.h...@hp.com
On 6/8/2012 11:37 AM, Jan Kiszka wrote:
On 2012-06-08 20:20, Chegu Vinod wrote:
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
[CC'ing qemu as this discusses its code base]
On 2012-06-08 19:57, Chegu Vinod wrote:
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700, Chegu
On 6/12/2012 8:39 AM, Gleb Natapov wrote:
On Tue, Jun 12, 2012 at 08:33:59AM -0700, Chegu Vinod wrote:
I rebuilt the 3.4.1 kernel in the guest from scratch and retried my
experiments and measured
the boot times...
a) Host : RHEL6.3 RC1 + qemu-kvm (that came with it) Guest :
RHEL6.3 RC1: ~1
On 6/10/2012 2:30 AM, Gleb Natapov wrote:
On Fri, Jun 08, 2012 at 11:20:53AM -0700, Chegu Vinod wrote:
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
BTW, another data point ...if I try to boot a the RHEL6.3 kernel in
the guest (with the latest qemu.git and the 3.4.1 on the host) it
boots just fine
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes) and tried it
on x86_64 server (with host and the guest running 3.4.1 kernel).
While trying to boot a large guest (80 vcpus + 512GB) I observed that the guest
took forever to boot up... ~1 hr or even more. [This wasn't
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 16:29 +, Chegu Vinod wrote:
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes) and tried it
on x86_64 server (with host and the guest running 3.4.1 kernel).
BTW, I observe the same thing if I were
On 6/8/2012 10:10 AM, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 16:29 +, Chegu Vinod wrote:
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes)
and tried it
on x86_64 server (with host and the guest running 3.4.1 kernel
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 16:29 +, Chegu Vinod wrote:
Hello,
I picked up a recent version of the qemu (1.0.92 with some fixes) and tried it
on x86_64
On 6/8/2012 11:08 AM, Jan Kiszka wrote:
[CC'ing qemu as this discusses its code base]
On 2012-06-08 19:57, Chegu Vinod wrote:
On 6/8/2012 10:42 AM, Alex Williamson wrote:
On Fri, 2012-06-08 at 10:10 -0700, Chegu Vinod wrote:
On 6/8/2012 9:46 AM, Alex Williamson wrote:
On Fri, 2012-06-08
On 6/4/2012 6:13 AM, Isaku Yamahata wrote:
On Mon, Jun 04, 2012 at 05:01:30AM -0700, Chegu Vinod wrote:
Hello Isaku Yamahata,
Hi.
I just saw your patches..Would it be possible to email me a tar bundle of these
patches (makes it easier to apply the patches to a copy of the upstream
qemu.git
Chegu Vinod chegu_vinod at hp.com writes:
Andrew Theurer habanero at linux.vnet.ibm.com writes:
Regarding the -numa option :
I had earlier (about a ~month ago) tried the -numa option. The layout I
specified didn't match the layout the guest saw. Haven't yet looked into the
exact
Andrew Theurer habanero at linux.vnet.ibm.com writes:
On 05/09/2012 08:46 AM, Avi Kivity wrote:
On 05/09/2012 04:05 PM, Chegu Vinod wrote:
Hello,
On an 8 socket Westmere host I am attempting to run a single guest and
characterize the virtualization overhead for a system intensive
Hello,
On an 8 socket Westmere host I am attempting to run a single guest and
characterize the virtualization overhead for a system intensive
workload (AIM7-high_systime) as the size of the guest scales (10way/64G,
20way/128G, ... 80way/512G).
To do some comparisons between the native vs.
On 5/9/2012 6:46 AM, Avi Kivity wrote:
On 05/09/2012 04:05 PM, Chegu Vinod wrote:
Hello,
On an 8 socket Westmere host I am attempting to run a single guest and
characterize the virtualization overhead for a system intensive
workload (AIM7-high_systime) as the size of the guest scales (10way
On 4/18/2012 10:43 PM, Gleb Natapov wrote:
On Thu, Apr 19, 2012 at 03:53:39AM +, Chegu Vinod wrote:
Hello,
Perhaps this query was answered in the past. If yes kindly point me to
the same.
We noticed differences in networking performance (measured via netperf
over a 10G NIC) on an X86_64
Nadav Har'El nyh at math.technion.ac.il writes:
On Fri, Apr 20, 2012, Chegu Vinod wrote about Re: Networking performance on
a
KVM Host (with no guests):
Removing the intel_iommu=on boot time parameter in the Config 1
case seemed to help
intel_iommu=on is essential when you're mostly
Hello,
Perhaps this query was answered in the past. If yes kindly point me to
the same.
We noticed differences in networking performance (measured via netperf
over a 10G NIC) on an X86_64 server between the following two
configurations :
1) Server run as a KVM Host (but with no KVM guests
On 4/17/2012 6:25 AM, Chegu Vinod wrote:
On 4/17/2012 2:49 AM, Gleb Natapov wrote:
On Mon, Apr 16, 2012 at 07:44:39AM -0700, Chegu Vinod wrote:
On 4/16/2012 5:18 AM, Gleb Natapov wrote:
On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote:
On 04/11/2012 01:21 PM, Chegu Vinod wrote
On 4/17/2012 2:49 AM, Gleb Natapov wrote:
On Mon, Apr 16, 2012 at 07:44:39AM -0700, Chegu Vinod wrote:
On 4/16/2012 5:18 AM, Gleb Natapov wrote:
On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote:
On 04/11/2012 01:21 PM, Chegu Vinod wrote:
Hello,
While running an AIM7
On 4/16/2012 5:18 AM, Gleb Natapov wrote:
On Thu, Apr 12, 2012 at 02:21:06PM -0400, Rik van Riel wrote:
On 04/11/2012 01:21 PM, Chegu Vinod wrote:
Hello,
While running an AIM7 (workfile.high_systime) in a single 40-way (or a single
60-way KVM guest) I noticed pretty bad performance when
Rik van Riel riel at redhat.com writes:
On 04/11/2012 01:21 PM, Chegu Vinod wrote:
Hello,
While running an AIM7 (workfile.high_systime) in a single 40-way (or a
single
60-way KVM guest) I noticed pretty bad performance when the guest was booted
with 3.3.1 kernel when compared
Hello,
While running an AIM7 (workfile.high_systime) in a single 40-way (or a single
60-way KVM guest) I noticed pretty bad performance when the guest was booted
with 3.3.1 kernel when compared to the same guest booted with 2.6.32-220
(RHEL6.2) kernel.
'am still trying to dig more into the