014 x86_64 x86_64 x86_64 GNU/Linux
Command to run kvm: kvm ~/disks/ubuntu-natty.qcow2 -nographic
Commands to profile with OProfile:
sudo operf -s --separate-thread --event=CPU_CLK_UNHALTED:500
--vmlinux=/home/xtong/xtong-kernel/vmlinux
sudo opreport --image-path /home/xtong/xtong-kernel -l -g -d -
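A per-process variant of the same measurement can make it easier to separate the vCPU thread from the I/O threads; a rough sketch, assuming a single qemu-system-x86_64 instance and an operf version that spells the event option --events:
  QEMU_PID=$(pidof qemu-system-x86_64)      # assumes a single instance
  sudo operf --pid "$QEMU_PID" --separate-thread \
       --events=CPU_CLK_UNHALTED:500 \
       --vmlinux=/home/xtong/xtong-kernel/vmlinux
  # stop operf with Ctrl-C when the benchmark finishes, then:
  sudo opreport --image-path /home/xtong/xtong-kernel -l -g -d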
On 03/02/2014 18:06, Xin Tong wrote:
/.../qemu-system-x86_64 TID 2537 [TID 2537] (877 ticks/71.24%)
This is the CPU thread (calls into the KVM modules).
/.../vmlinux (395 ticks/45.04%)
/kvm (198 ticks/22.58%)
/kvm_intel (153 ticks/17.45%)
/
This is a profile taken of KVM on the host. The KVM guest Linux is
running SPEC CPU2006 403.gcc with the reference data set. The commands to
run and report the profile are:
sudo operf -s --separate-thread --event=CPU_CLK_UNHALTED:500
--vmlinux=/home/xtong/xtong-kernel/vmlinux
sudo opreport --image-path /
On 02/02/2014 03:08, Xin Tong wrote:
I am getting very weird profile results by running operf on Linux on
the host and profiling a KVM virtual machine running the DaCapo
eclipse benchmark. I am expecting that a lot of time should be spent
in qemu-system-x86_64, as the instructions from the ec
I am getting very weird profile results by running operf on Linux on
the host and profiling a KVM virtual machine running the DaCapo
eclipse benchmark. I am expecting that a lot of time should be spent
in qemu-system-x86_64, as the instructions from the eclipse benchmark
would be treated as part of
Hi
I would like to measure the performance of KVM using hardware
performance counters, and I have some questions:
1. If I want to get the amount of time spent in instruction and device
emulation, should I use OProfile on the KVM process on the host
machine?
2. What about the amount of time spent in
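One hedged way to approach both questions is to profile the QEMU process on the host and to classify VM exits with perf kvm stat; a sketch, where <qemu-pid> is a placeholder and a reasonably recent perf is assumed:
  # Record and summarize guest exits (how many, what kind); stop with Ctrl-C:
  sudo perf kvm stat record -p <qemu-pid>
  sudo perf kvm stat report
  # Plain host-side sampling of the same process: userspace hits in
  # qemu-system-x86_64 roughly cover device emulation, kernel hits in
  # kvm/kvm_intel cover in-kernel exit handling:
  sudo perf record -p <qemu-pid> sleep 30
  sudo perf report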
Hi,
I'm doing extensive performance tests on KVM at the moment. In various
scenarios I can produce a noticeable impact even though no performance
metric really shows it: general sluggishness, like
typing lagging behind my fingers, commands taking a few seconds to
respond. Nothing d
Hi,
I'm using KVM on Debian and run a VM with Debian and Firefox.
Unfortunately, Firefox in the VM is *extremely* slow (and maybe
it got slower with newer Firefox versions, but I'm not sure):
nearly every time I open a new Firefox tab or click on a link,
CPU load goes to 100% ("system time" accordi
>
>> Other optimizations people are testing out there.
>>
>> - use "nohz=off" in the kernel loading line in menu.lst
>> - Disable cgroups completely, using cgclear and turning off the cgred
>> and cgconfig daemons.
I also tried this option but it did not have a significant effect and
degraded performan
> Other optimizations people are testing out there.
>
> - use "nohz=off" in the kernel loading line in menu.lst
> - Disable cgroups completely, using cgclear and turning off the cgred
> and cgconfig daemons.
>
> And from a personal point of view, we've always tried to use MySQL on
> a different server fr
Other optimizations people are testing out there.
- use "nohz=off" in the kernel loading line y menu.lst
- Disable Cgroups completely. Using cgclear, and turning off cgred
cg-config daemons.
And from a Personal point of view, we've always tried to use MySQL in
a different server from JBoss.
99% o
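For reference, a sketch of what those two tweaks can look like (the kernel image, root device and service names below are illustrative; cgclear and the cgred/cgconfig daemons come with libcgroup):
  # 1. GRUB legacy: append nohz=off to the kernel line in menu.lst, e.g.
  #    kernel /boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda1 ro quiet nohz=off
  # 2. Stop the libcgroup daemons and tear down all cgroup hierarchies:
  sudo service cgred stop
  sudo service cgconfig stop
  sudo cgclear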
>
>> On Thu, Feb 07, 2013 at 04:41:31PM +0100, Erik Brakkee wrote:
>>> Hi,
>>>
>>>
>>> We have been benchmarking a java server application (java 6 update 29)
>>> that requires a mysql database. The scenario is quite simple. We open a
>>> web page which displays a lot of search results. To get the
> The I/O scheduler on the host and on the guest is CFS. We also tried the
> deadline scheduler on the host, but this did not make any measurable
> difference. We did not try noop on the host.
I meant, of course, that we did not try noop on the guest (not on the host).
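For what it's worth, the per-device elevator can be inspected and switched at runtime through sysfs; a minimal sketch, with sda standing in for the actual device:
  cat /sys/block/sda/queue/scheduler           # e.g. "noop anticipatory deadline [cfq]"
  echo noop | sudo tee /sys/block/sda/queue/scheduler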
> On Thu, Feb 07, 2013 at 04:41:31PM +0100, Erik Brakkee wrote:
>> Hi,
>>
>>
>> We have been benchmarking a java server application (java 6 update 29)
>> that requires a mysql database. The scenario is quite simple. We open a
>> web page which displays a lot of search results. To get the content o
On Thu, Feb 07, 2013 at 04:41:31PM +0100, Erik Brakkee wrote:
> Hi,
>
>
> We have been benchmarking a java server application (java 6 update 29)
> that requires a mysql database. The scenario is quite simple. We open a
> web page which displays a lot of search results. To get the content of the
>
Hi,
We have been benchmarking a Java server application (Java 6 update 29)
that requires a MySQL database. The scenario is quite simple: we open a
web page which displays a lot of search results. To get the content of the
page, one big query is done together with many smaller queries to retrieve the data.
May 15, 2012 6:48 PM
> > To: kvm@vger.kernel.org
> > Subject: Descriptions about KVM performance counters
> >
> > Dear all,
> >
> > Is there a brief description or any document about the meanings of kvm
> > performance events traced by 'perf' command? S
Could anyone please comment or provide some useful insights? Really appreciated!
Hailong
> -Original Message-
> From: Hailong Yang [mailto:hailong.yang1...@gmail.com]
> Sent: Tuesday, May 15, 2012 6:48 PM
> To: kvm@vger.kernel.org
> Subject: Descriptions about KVM perfo
Dear all,
Is there a brief description or any document about the meanings of the KVM
performance events traced by the 'perf' command? It is hard to guess what
some of the events stand for. Also, is there any correlation or exact
mapping between the output of 'kvm_stat' and
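As far as I know, the kvm:* events perf shows are the KVM tracepoints, and kvm_stat is fed by the same instrumentation through debugfs, so the names largely line up; a sketch for listing and counting them (the chosen events are only examples):
  sudo perf list 'kvm:*'                                    # list the KVM trace events
  sudo perf stat -e kvm:kvm_exit,kvm:kvm_entry -a sleep 10  # count them system-wide
  ls /sys/kernel/debug/kvm        # the counters kvm_stat reads (assumes debugfs is mounted)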
On Mon, 2009-07-06 at 14:53 +0300, Dor Laor wrote:
> On 07/06/2009 12:34 PM, Martin Petermann wrote:
> > I'm currently looking at the network performance between two KVM guests
> > running on the same host. The host system is equipped with two quad-core
> > 3 GHz Xeons and 32 GB of memory. 2 GB of memory
On 07/06/2009 12:34 PM, Martin Petermann wrote:
I'm currently looking at the network performance between two KVM guests
running on the same host. The host system is equipped with two quad-core
3 GHz Xeons and 32 GB of memory. 2 GB of memory is assigned to the guests,
enough that swap is not used. I'm us
I'm currently looking at the network performance between two KVM guests
running on the same host. The host system is equipped with two quad-core
3 GHz Xeons and 32 GB of memory. 2 GB of memory is assigned to the guests,
enough that swap is not used. I'm using RHEL 5.3 (2.6.18-128.1.10.el5)
on all the thre
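A minimal way to take such a guest-to-guest measurement, as a sketch (the address and options are placeholders; netperf works equally well):
  # on guest 1 (server):
  iperf -s
  # on guest 2 (client), pointing at guest 1's IP:
  iperf -c 192.168.122.10 -t 30 -P 4    # 30-second run, 4 parallel TCP streams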
Andrew Theurer wrote:
If the overhead is dominated by copying, then you won't see the
difference. Once the copying is eliminated, the comparison may yield
different results. We should certainly see a difference in context
switches.
I would like to test this the proper way. What do I need t
Here are the SMT off results. This workload is designed to not
over-saturate the CPU, so you have to pick a number of server sets to
ensure that. With SMT on, 4 sets was enough for KVM, but 5 was too much
(start seeing response time errors). For SMT off, I tried to size the
load as high as w
On Thu, Apr 30, 2009 at 11:56:14AM +0300, Avi Kivity wrote:
> Andrew Theurer wrote:
>> Comparing guest time to all other busy time, that's a 23.88/43.02 = 55%
>> overhead for virtualization. I certainly don't expect it to be 0, but
>> 55% seems a bit high. So, what's the reason for this overhead?
Avi Kivity wrote:
Anthony Liguori wrote:
Previously, the block API only exposed non-vector interfaces and
bounced vectored operations to a linear buffer. That's been
eliminated now though so we need to update the linux-aio patch to
implement a vectored backend interface.
However, it is an
Anthony Liguori wrote:
Previously, the block API only exposed non-vector interfaces and
bounced vectored operations to a linear buffer. That's been
eliminated now though so we need to update the linux-aio patch to
implement a vectored backend interface.
However, it is an apples to apples c
Andrew Theurer wrote:
Avi Kivity wrote:
Anthony Liguori wrote:
Avi Kivity wrote:
1) I'm seeing about 2.3% in scheduler functions [that I recognize].
Does that seem a bit excessive?
Yes, it is. If there is a lot of I/O, this might be due to the
thread pool used for I/O.
This is why I wr
Avi Kivity wrote:
Anthony Liguori wrote:
Avi Kivity wrote:
1) I'm seeing about 2.3% in scheduler functions [that I recognize].
Does that seem a bit excessive?
Yes, it is. If there is a lot of I/O, this might be due to the
thread pool used for I/O.
This is why I wrote the linux-aio patch
Avi Kivity wrote:
Anthony Liguori wrote:
2) cpu_physical_memory_rw due to not using preadv/pwritev?
I think both virtio-net and virtio-blk use memcpy().
With latest linux-2.6, and a development snapshot of glibc,
virtio-blk will not use memcpy() anymore but virtio-net still does on
the r
Avi Kivity wrote:
Anthony Liguori wrote:
Avi Kivity wrote:
1) I'm seeing about 2.3% in scheduler functions [that I recognize].
Does that seem a bit excessive?
Yes, it is. If there is a lot of I/O, this might be due to the
thread pool used for I/O.
This is why I wrote the linux-aio patch
Anthony Liguori wrote:
Avi Kivity wrote:
1) I'm seeing about 2.3% in scheduler functions [that I recognize].
Does that seem a bit excessive?
Yes, it is. If there is a lot of I/O, this might be due to the
thread pool used for I/O.
This is why I wrote the linux-aio patch. It only reduced
Anthony Liguori wrote:
2) cpu_physical_memory_rw due to not using preadv/pwritev?
I think both virtio-net and virtio-blk use memcpy().
With latest linux-2.6, and a development snapshot of glibc, virtio-blk
will not use memcpy() anymore but virtio-net still does on the receive
path (but no
Andrew Theurer wrote:
disk: read: 17 MB/sec, write: 40 MB/sec
This could definitely cause the extra load, especially if it's many
small requests (compared to a few large ones).
I don't have the request sizes at my fingertips, but we have to use a
lot of disks to support this I/O, so I
Andrew Theurer wrote:
Really, I think linux-aio support can help here.
Yes, I think that would work for real block devices, but would that
help for files? I am using real block devices right now, but it would
be nice to also see a benefit for files in a file-system. Or maybe I
am mis-unders
Avi Kivity wrote:
Andrew Theurer wrote:
Avi Kivity wrote:
What's the typical I/O load (disk and network bandwidth) while the
tests are running?
This is the average throughput:
network: Tx: 79 MB/sec, Rx: 5 MB/sec
MB as in Byte or Mb as in bit?
Byte. There are 4 x 1 Gb adapters, each han
Avi Kivity wrote:
1) I'm seeing about 2.3% in scheduler functions [that I recognize].
Does that seem a bit excessive?
Yes, it is. If there is a lot of I/O, this might be due to the thread
pool used for I/O.
This is why I wrote the linux-aio patch. It only reduced CPU
consumption by abou
Andrew Theurer wrote:
Avi Kivity wrote:
What's the typical I/O load (disk and network bandwidth) while the
tests are running?
This is the average throughput:
network: Tx: 79 MB/sec, Rx: 5 MB/sec
MB as in Byte or Mb as in bit?
disk: read: 17 MB/sec, write: 40 MB/sec
This could defin
Avi Kivity wrote:
Andrew Theurer wrote:
I wanted to share some performance data for KVM and Xen. I thought it
would be interesting to share some performance results especially
compared to Xen, using a more complex situation like heterogeneous
server consolidation.
The Workload:
The workload is
Andrew Theurer wrote:
I wanted to share some performance data for KVM and Xen. I thought it
would be interesting to share some performance results especially
compared to Xen, using a more complex situation like heterogeneous
server consolidation.
The Workload:
The workload is one that simulates
Nakajima, Jun wrote:
On 4/29/2009 7:41:50 AM, Andrew Theurer wrote:
I wanted to share some performance data for KVM and Xen. I thought it
would be interesting to share some performance results especially
compared to Xen, using a more complex situation like heterogeneous
server consolidation.
On 4/29/2009 7:41:50 AM, Andrew Theurer wrote:
> I wanted to share some performance data for KVM and Xen. I thought it
> would be interesting to share some performance results especially
> compared to Xen, using a more complex situation like heterogeneous
> server consolidation.
>
> The Workload:
I wanted to share some performance data for KVM and Xen. I thought it
would be interesting to share some performance results especially
compared to Xen, using a more complex situation like heterogeneous
server consolidation.
The Workload:
The workload is one that simulates a consolidation of ser
BRAUN, Stefanie wrote:
Hello,
I've compiled a new kernel v2.6.27-rc5 with the modified svm.c.
But the behaviour of the vlc process in the guest is still the same.
I've exported additional CPU features to the guest, e.g. mmxext, with kvm-84,
but there were no performance changes.
I was not able to export
ameters.
Oh, maybe kvm-84 doesn't have this support? Try
http://userweb.kernel.org/~avi/kvm-85rc6/.
-----Original Message-----
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Tuesday, 14 April 2009 10:48
To: BRAUN, Stefanie
Cc: kvm@vger.kernel.org
Subject: Re: AW: AW: AW: AW
BRAUN, Stefanie wrote:
Hello,
The host runs on a Dual-Core AMD Opteron processor.
Does a similar AMD parameter exist?
You can add individual host CPU features by using '-cpu
qemu64,+feature', where the feature name is taken from the host's /proc/cpuinfo.
Do you know which CPU features the prog
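A sketch of what that looks like in practice; sse3 and cx16 are only illustrative picks, use flags your own /proc/cpuinfo actually lists:
  grep -m1 '^flags' /proc/cpuinfo        # which flags does the host CPU expose?
  qemu-system-x86_64 -m 1024 -hda guest.img \
      -cpu qemu64,+sse3,+cx16            # add selected host flags to the qemu64 model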
: AW: KVM performance
BRAUN, Stefanie wrote:
> Hello,
>
> now I was able to start the guest vmu with disk virtio, and some of
> the tests with disk involvement even improved a bit.
> But the test in which a logo is added to the video stream does not
> improve. I don't know
BRAUN, Stefanie wrote:
Hello,
Now I was able to start the guest vmu with a virtio disk, and some of the
tests with disk involvement even improved a bit.
But the test in which a logo is added to the video stream does not
improve. I don't know why the performance is so bad.
Subtest: Reading video
Hello,
Now I was able to start the guest vmu with a virtio disk, and some of the
tests with disk involvement even improved a bit.
But the test in which a logo is added to the video stream does not
improve. I don't know why the performance is so bad.
Subtest: Reading video locally, adding a logo to
BRAUN, Stefanie wrote:
> 1. Subtest: VLC reads video from local disk and streams it via UDP to
> another PC
> Host performance:          11%  11%
> kvm process in host (top): 22%  22%
> vlc process in vmu (top):  15%
BRAUN, Stefanie wrote:
1. Subtest: VLC reads video from local disk and streams it via UDP to another PC
Host performance:          11%  11%
kvm process in host (top): 22%  22%
vlc process in vmu (top):  15%  7
ess in vmu (top) :
33,8%
-----Original Message-----
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Monday, 6 April 2009 18:36
To: BRAUN, Stefanie
Subject: Re: AW: KVM performance
BRAUN, Stefanie wrote:
> Is this a tcp test?
>
> Can you t
April 2009 14:13
To: kvm@vger.kernel.org
Cc: BRAUN, Stefanie
Subject: Re: KVM performance
On Friday 03 April 2009 13:32:50 you wrote:
> Hello,
>
> As I want to switch from XEN to KVM I've made some performance tests
> to see if KVM is as performant as XEN. But tests with a VMU that
-----Original Message-----
From: Hauke Hoffmann [mailto:kont...@hauke-hoffmann.net]
Sent: Monday, 6 April 2009 14:13
To: kvm@vger.kernel.org
Cc: BRAUN, Stefanie
Subject: Re: KVM performance
On Friday 03 April 2009 13:32:50 you wrote:
> Hello,
>
> As I want to switch from X
-----Original Message-----
From: BRAUN, Stefanie
Sent: Monday, 6 April 2009 18:25
To: 'Avi Kivity'
Subject: AW: KVM performance
-----Original Message-----
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Monday, 6 April 2009 13:45
To: BRAUN, Stefan
lient
> have shown that XEN performs much better than KVM.
> In XEN the vlc (VideoLAN client used to receive, process and send the
> video) process
> within the vmu has a CPU load of 33.8% whereas in KVM
> the vlc process has a CPU load of 99.9%.
> I'm not sure why; does
than KVM.
In XEN the vlc (VideoLAN client used to receive, process and send the
video) process
within the vmu has a CPU load of 33.8% whereas in KVM
the vlc process has a CPU load of 99.9%.
I'm not sure why; does anybody know some settings to improve
the KVM performance?
Is this a tcp
e vlc (videolan client used to receive, process and send the
video) process
within the vmu has a CPU load of 33.8% whereas in KVM
the vlc process has a CPU load of 99.9%.
I'm not sure why; does anybody know some settings to improve
the KVM performance?
Thank you.
Regards, Stefanie.
Used ha
Randy Broman wrote:
After I submitted the initial question, I downloaded the latest kernel
2.6.27.6, and compiled
with the following options, some of which are new since my previous
kernel 2.6.24-21.
CONFIG_PARAVIRT_GUEST=y
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_VMI=y
CONFIG_KVM_CLOCK=y
CONFIG_KVM_G
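One way to check, inside the guest, whether the paravirtual clock enabled by CONFIG_KVM_CLOCK is actually in use (a sketch using the usual sysfs location):
  cat /sys/devices/system/clocksource/clocksource0/current_clocksource   # expect "kvm-clock"
  dmesg | grep -i kvm-clock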
Don't use kvm in the tarball. It's not what you want. That's just a wrapper
that calls qemu/kvm (possibly even the system one) after it mangles some
command line options. Use qemu/x86_64-softmmu/qemu-system-x86_64 from the
tarball if you aren't going to install it. Then you just use the same com
After I submitted the initial question, I downloaded the latest kernel
2.6.27.6, and compiled
with the following options, some of which are new since my previous
kernel 2.6.24-21.
CONFIG_PARAVIRT_GUEST=y
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_VMI=y
CONFIG_KVM_CLOCK=y
CONFIG_KVM_GUEST=y
# CONFIG_LGUES
Randy Broman wrote:
-I've tried both the default Cirrus adapter and the "-std-vga" option.
Which is better?
Cirrus is generally better, but supports fewer resolutions.
I saw reference to another VMware-based adapter, but I can't figure
out how to implement
it - would that be better?
-vg
See if boosting the priority of the VM (see man chrt) and locking it to
a core (see man taskset) helps. You'll want to do that for the vcpu
thread(s) (in the qemu monitor, run the 'info cpus' command).
david
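A sketch of what that can look like once 'info cpus' has shown the vcpu thread id; the tid 4321, priority 10 and core 2 below are placeholders:
  # QEMU monitor: (qemu) info cpus   ->   * CPU #0: ... thread_id=4321
  sudo chrt -f -p 10 4321      # give the vcpu thread SCHED_FIFO priority 10
  sudo taskset -cp 2 4321      # pin it to core 2
  chrt -p 4321                 # verify policy/priority
  taskset -cp 4321             # verify affinity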
Randy Broman wrote:
> I am using Intel Core2 Duo E6600, Kubuntu 8.04 with kernel
> 2.6.24-2
I am using Intel Core2 Duo E6600, Kubuntu 8.04 with kernel
2.6.24-21-generic,
kvm (as in "QEMU PC emulator version 0.9.1 (kvm-62)") and a WinXP SP3
guest,
with bridged networking. My start command is:
sudo kvm -m 1024 -cdrom /dev/cdrom -boot c \
    -net nic,macaddr=00:d0:13:b0:2d:32,model=rtl8139
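Since virtio comes up elsewhere in these threads, a sketch of the same start command with paravirtual disk and NIC instead of IDE/rtl8139 (winxp.img and the tap networking setup are placeholders, and the XP guest needs virtio drivers installed first):
  sudo kvm -m 1024 -boot c \
      -drive file=winxp.img,if=virtio \
      -net nic,macaddr=00:d0:13:b0:2d:32,model=virtio \
      -net tap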