On Wed, Apr 22, 2009 at 10:57:13PM +0800, alex wrote:
the debug tools I used.
---
diff --git a/arch/x86/kvm/trace.c b/arch/x86/kvm/trace.c
new file mode 100644
index 000..5718e94
--- /dev/null
+++
Jorge Lucángeli Obes wrote:
...
Aidan, were you able to solve this? I was having the same (original)
problem in Xubuntu 64-bits with a custom 2.6.31 kernel and kvm-88. I
still haven't tried Jan's patch (paper deadline at work) but I wanted
to know if you had made any progress.
The kvm-kmod
2009/9/24 Jan Kiszka jan.kis...@web.de:
Jorge Lucángeli Obes wrote:
...
Aidan, were you able to solve this? I was having the same (original)
problem in Xubuntu 64-bits with a custom 2.6.31 kernel and kvm-88. I
still haven't tried Jan's patch (paper deadline at work) but I wanted
to know if
On 09/24/2009 12:15 AM, Gregory Haskins wrote:
There are various aspects about designing high-performance virtual
devices such as providing the shortest paths possible between the
physical resources and the consumers. Conversely, we also need to
ensure that we meet proper isolation/protection
On 09/24/2009 09:42 AM, Jan Kiszka wrote:
Jorge Lucángeli Obes wrote:
...
Aidan, were you able to solve this? I was having the same (original)
problem in Xubuntu 64-bits with a custom 2.6.31 kernel and kvm-88. I
still haven't tried Jan's patch (paper deadline at work) but I wanted
to know
On 09/23/2009 06:45 PM, Jan Kiszka wrote:
Functions calling each other in the same subsystem can rely on callers
calling cpu_synchronize_state(). Across subsystems, that's another
matter, exported functions should try not to rely on implementation
details of their callers.
(You might argue
On Thu, Sep 24, 2009 at 10:53:59AM +0300, Avi Kivity wrote:
On 09/23/2009 06:45 PM, Jan Kiszka wrote:
Functions calling each other in the same subsystem can rely on callers
calling cpu_synchronize_state(). Across subsystems, that's another
matter, exported functions should try not to rely on
On 09/23/2009 10:37 PM, Avi Kivity wrote:
Example: feature negotiation. If it happens in userspace, it's easy
to limit what features we expose to the guest. If it happens in the
kernel, we need to add an interface to let the kernel know which
features it should expose to the guest. We
Gleb Natapov wrote:
On Thu, Sep 24, 2009 at 10:53:59AM +0300, Avi Kivity wrote:
On 09/23/2009 06:45 PM, Jan Kiszka wrote:
Functions calling each other in the same subsystem can rely on callers
calling cpu_synchronize_state(). Across subsystems, that's another
matter, exported functions
On 09/24/2009 11:03 AM, Gleb Natapov wrote:
The new rule is: Synchronize the states before accessing registers (or
in-kernel devices) the first time after a vmexit to user space.
No, the rule is: synchronize state before accessing registers.
Extra synchronization is cheap, while
On Thu, Sep 24, 2009 at 10:15:15AM +0200, Jan Kiszka wrote:
Gleb Natapov wrote:
On Thu, Sep 24, 2009 at 10:53:59AM +0300, Avi Kivity wrote:
On 09/23/2009 06:45 PM, Jan Kiszka wrote:
Functions calling each other in the same subsystem can rely on callers
calling cpu_synchronize_state().
Gleb Natapov wrote:
On Thu, Sep 24, 2009 at 10:15:15AM +0200, Jan Kiszka wrote:
Gleb Natapov wrote:
On Thu, Sep 24, 2009 at 10:53:59AM +0300, Avi Kivity wrote:
On 09/23/2009 06:45 PM, Jan Kiszka wrote:
Functions calling each other in the same subsystem can rely on callers
calling
On Thu, Sep 24, 2009 at 10:59:46AM +0200, Jan Kiszka wrote:
Gleb Natapov wrote:
On Thu, Sep 24, 2009 at 10:15:15AM +0200, Jan Kiszka wrote:
Gleb Natapov wrote:
On Thu, Sep 24, 2009 at 10:53:59AM +0300, Avi Kivity wrote:
On 09/23/2009 06:45 PM, Jan Kiszka wrote:
Functions calling each
Bugs item #2826486, was opened at 2009-07-24 11:16
Message generated for change (Comment added) made by aurel32
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2826486&group_id=180599
Please note that this message will contain a full copy of the comment
On 09/23/2009 06:58 PM, Matthew Tippett wrote:
Hi,
I would like to call attention to the SQLite performance under KVM in
the current Ubuntu Alpha.
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3
SQLite's benchmark as part of the Phoronix Test Suite is typically IO
On 09/24/2009 03:01 AM, Matt Piermarini wrote:
If anybody has any ideas I can try, I'd surely appreciate it. My host
does NOT have vt-d capable hardware, and I'm not even sure that is a
requirement - is it? Host is an Intel ICH10/P45/Q6600.
Flags: bus master, medium devsel, latency 64,
Thanks Avi,
I am still trying to reconcile your statement with the potential
data risks and the numbers observed.
My read of your response is that the guest sees a consistent view -
the data is committed to the virtual disk device. Does a synchronous
write within the guest trigger a
On 09/24/2009 03:31 PM, Matthew Tippett wrote:
Thanks Avi,
I am still trying to reconcile your statement with the potential
data risks and the numbers observed.
My read of your response is that the guest sees a consistent view -
the data is committed to the virtual disk device. Does a
Hello All,
I am happy to announce that the Windows guest drivers binaries are
released.
http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
Best regards,
Yan Vugenfirer.
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to
On 09/24/2009 07:46 AM, Avi Kivity wrote:
On 09/24/2009 03:01 AM, Matt Piermarini wrote:
If anybody has any ideas I can try, I'd surely appreciate it. My
host does NOT have vt-d capable hardware, and I'm not even sure that
is a requirement - is it? Host is an Intel ICH10/P45/Q6600.
On Wed, Sep 23, 2009 at 09:47:18PM +0300, Izik Eidus wrote:
this is needed for kvm if it wants ksm to directly map pages into its
shadow page tables.
Signed-off-by: Izik Eidus iei...@redhat.com
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 62
On Wed, Sep 23, 2009 at 05:29:01PM -1000, Zachary Amsden wrote:
They are globals, not clearly protected by any ordering or locking, and
vulnerable to various startup races.
Instead, for variable TSC machines, register the cpufreq notifier and get
the TSC frequency directly from the cpufreq
Bugs item #2865820, was opened at 2009-09-24 10:44
Message generated for change (Tracker Item Submitted) made by jimerickson
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2865820&group_id=180599
Please note that this message will contain a full copy of
On Wed, Sep 23, 2009 at 05:29:02PM -1000, Zachary Amsden wrote:
Both VMX and SVM require per-cpu memory allocation, which is done at module
init time, for only online cpus. When bringing a new CPU online, we must
also allocate this structure. The method chosen to implement this is to
make
My boss asked me to install and configure a streaming server for live videos.
My choice for the server is red5, an open source streaming server.
Do you think I can use a KVM virtual machine for this server, or is it
better not to use virtualization?
My hardware is an HP ProLiant DL580 G5 with 4 Intel
On Wed, Sep 23, 2009 at 09:47:17PM +0300, Izik Eidus wrote:
this flag notifies that the host physical page we are pointing to from
the spte is write protected, and therefore we can't change its access
to write unless we run get_user_pages(write = 1).
(this is needed for change_pte support in
On Wed, Sep 23, 2009 at 09:47:16PM +0300, Izik Eidus wrote:
When using mmu notifiers, we are allowed to remove the page count
reference taken by get_user_pages to a specific page that is mapped
inside the shadow page tables.
This is needed so we can balance the pagecount against mapcount
On Thu, Sep 24, 2009 at 11:06:51AM -0300, Marcelo Tosatti wrote:
On Mon, Sep 21, 2009 at 08:37:18PM -0300, Marcelo Tosatti wrote:
Use two steps for memslot deletion: mark the slot invalid (which stops
instantiation of new shadow pages for that slot, but allows destruction),
then
Avi Kivity wrote:
On 09/24/2009 12:15 AM, Gregory Haskins wrote:
There are various aspects about designing high-performance virtual
devices such as providing the shortest paths possible between the
physical resources and the consumers. Conversely, we also need to
ensure that we meet proper
Avi Kivity wrote:
On 09/23/2009 10:37 PM, Avi Kivity wrote:
Example: feature negotiation. If it happens in userspace, it's easy
to limit what features we expose to the guest. If it happens in the
kernel, we need to add an interface to let the kernel know which
features it should expose to
On Thu, Sep 24, 2009 at 10:18:28AM +0300, Avi Kivity wrote:
On 09/24/2009 12:15 AM, Gregory Haskins wrote:
There are various aspects about designing high-performance virtual
devices such as providing the shortest paths possible between the
physical resources and the consumers.
The test itself is a simple usage of SQLite. It is stock KVM as
available in 2.6.31 on Ubuntu Karmic. So it would be the environment,
not the test.
So assuming that KVM upstream works as expected that would leave
either 2.6.31 having an issue, or Ubuntu having an issue.
Care to make an
On 09/24/2009 05:52 AM, Marcelo Tosatti wrote:
+static __cpuinit int vmx_cpu_hotadd(int cpu)
+{
+	struct vmcs *vmcs;
+
+	if (per_cpu(vmxarea, cpu))
+		return 0;
+
+	vmcs = alloc_vmcs_cpu(cpu);
+	if (!vmcs)
+		return -ENOMEM;
+
+
2009/9/24 Yan Vugenfirer yvuge...@redhat.com:
Hello All,
I am happy to announce that the Windows guest drivers binaries are
released.
Thank you, I've been waiting for this for quite a while :)
I've done some benchmarking with the drivers on Windows XP SP3 32bit,
but it seems like using the
On Thu, Sep 24, 2009 at 3:38 PM, Kenni Lund ke...@kelu.dk wrote:
I've done some benchmarking with the drivers on Windows XP SP3 32bit,
but it seems like using the VirtIO drivers is slower than the IDE drivers in
(almost) all cases. Perhaps I've missed something or does the driver still
need
On 09/24/2009 11:59 PM, Javier Guerra wrote:
On Thu, Sep 24, 2009 at 3:38 PM, Kenni Lundke...@kelu.dk wrote:
I've done some benchmarking with the drivers on Windows XP SP3 32bit,
but it seems like using the VirtIO drivers is slower than the IDE drivers in
(almost) all cases. Perhaps I've
For a heavily I/O-bound load such as media streaming, it's better not to
use virtualization. There are some newer technologies such as SR-IOV
which may mitigate these problems, but I don't particularly suggest
straying that close to the bleeding edge on a presumably
mission-critical system.
Simplified the patch series a bit and fixed some bugs noticed by Marcelo.
Axed the hot-remove notifier (was not needed), fixed a locking bug by
using cpufreq_quick_get, fixed another bug in kvm_cpu_hotplug that was
filtering out online notifications when KVM was loaded but not in use.
--
To
Signed-off-by: Zachary Amsden zams...@redhat.com
---
arch/x86/kvm/x86.c | 23 +++
1 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fedac9d..15d2ace 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@
They are globals, not clearly protected by any ordering or locking, and
vulnerable to various startup races.
Instead, for variable TSC machines, register the cpufreq notifier and get
the TSC frequency directly from the cpufreq machinery. Not only is it
always right, it is also perfectly
In the process of bringing down CPUs, the SVM / VMX structures associated
with those CPUs are not freed. This may cause leaks when unloading and
reloading the KVM module, as only the structures associated with online
CPUs are cleaned up. So, clean up all possible CPUs, not just online ones.
The CPU frequency change callback provides the new TSC frequency for us, in the
same units (kHz), so there is no reason to do any math.
Signed-off-by: Zachary Amsden zams...@redhat.com
---
arch/x86/kvm/x86.c |5 +
1 files changed, 1 insertions(+), 4 deletions(-)
diff --git
Both VMX and SVM require per-cpu memory allocation, which is done at module
init time, for only online cpus. When bringing a new CPU online, we must
also allocate this structure. The method chosen to implement this is to
make the CPU online notifier available via a call to the arch code. This
Avi,
hrtimer is used for sleep in the attached patch, which has a similar perf
gain to the previous one. Maybe we can check in this patch first, and turn
to direct yield in the future, as you suggested.
Thanks,
edwin
Avi Kivity wrote:
On 09/23/2009 05:04 PM, Zhai, Edwin wrote:
Avi,
This is the
The Phoronix Test Suite is designed to test a (client) operating
system out of the box and it does a good job at that.
It's certainly valid to run PTS inside a virtual machine, but you're
going to need to tune the host, in this case Karmic.
The way you'd configure a client operating system to
Thanks for your response.
Remember that I am not raising questions about the relative performance
of guests under KVM. The prevailing opinion would be that performance
of a guest would range anywhere from considerably slower to around the
same performance as native - depending on workload,
* Made the build of older KVM trees possible
* Now the test handles loading extra modules, improved module loading code
* Other small cleanups
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
client/tests/kvm/tests/build.py | 125 ++
1 files
repository: /home/vadimr/shares/kvm-guest-drivers-windows
branch: master
commit c507d6b279010ff1e1939927d2b2e91a59daac3b
Author: Vadim Rozenfeld vroze...@redhat.com
Date: Thu Sep 24 22:03:00 2009 +0300
[PATCH] viostor driver. Complete SRBs at DPC level
Signed-off-by: Vadim