On Mon, Sep 10, 2012 at 07:50:13AM +0200, Paolo Bonzini wrote:
Il 09/09/2012 00:22, Michael S. Tsirkin ha scritto:
Almost. One is the guest, if really needed, can tell the host of
pages. If not negotiated, and the host does not support it, the host
must break the guest (e.g. fail to offer
On Mon, Sep 10, 2012 at 11:42:25AM +0930, Rusty Russell wrote:
OK, I read the spec (pasted below for ease of reading), but I'm still
confused over how this will work.
I thought normal net drivers have the hardware provide an rxhash for
each packet, and we map that to CPU to queue the packet
Il 09/09/2012 00:40, Michael S. Tsirkin ha scritto:
On Fri, Sep 07, 2012 at 06:00:50PM +0200, Paolo Bonzini wrote:
Il 07/09/2012 08:48, Nicholas A. Bellinger ha scritto:
Cc: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Cc: Zhi Yong Wu wu...@linux.vnet.ibm.com
Cc: Michael S. Tsirkin
On Mon, Sep 10, 2012 at 08:16:54AM +0200, Paolo Bonzini wrote:
Il 09/09/2012 00:40, Michael S. Tsirkin ha scritto:
On Fri, Sep 07, 2012 at 06:00:50PM +0200, Paolo Bonzini wrote:
Il 07/09/2012 08:48, Nicholas A. Bellinger ha scritto:
Cc: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Cc: Zhi
On Mon, Sep 10, 2012 at 09:16:29AM +0300, Michael S. Tsirkin wrote:
On Mon, Sep 10, 2012 at 11:42:25AM +0930, Rusty Russell wrote:
OK, I read the spec (pasted below for ease of reading), but I'm still
confused over how this will work.
I thought normal net drivers have the hardware
On Mon, Sep 10, 2012 at 09:27:38AM +0300, Michael S. Tsirkin wrote:
On Mon, Sep 10, 2012 at 09:16:29AM +0300, Michael S. Tsirkin wrote:
On Mon, Sep 10, 2012 at 11:42:25AM +0930, Rusty Russell wrote:
OK, I read the spec (pasted below for ease of reading), but I'm still
confused over how
Il 10/09/2012 08:03, Michael S. Tsirkin ha scritto:
On Mon, Sep 10, 2012 at 07:50:13AM +0200, Paolo Bonzini wrote:
Il 09/09/2012 00:22, Michael S. Tsirkin ha scritto:
Almost. One is the guest, if really needed, can tell the host of
pages. If not negotiated, and the host does not support it,
On Mon, Sep 10, 2012 at 08:38:09AM +0200, Paolo Bonzini wrote:
Il 10/09/2012 08:03, Michael S. Tsirkin ha scritto:
On Mon, Sep 10, 2012 at 07:50:13AM +0200, Paolo Bonzini wrote:
Il 09/09/2012 00:22, Michael S. Tsirkin ha scritto:
Almost. One is the guest, if really needed, can tell the
On Mon, Sep 10, 2012 at 08:42:15AM +0200, Paolo Bonzini wrote:
Il 10/09/2012 08:24, Michael S. Tsirkin ha scritto:
I chose the backend name because, ideally, there would be no other
difference. QEMU _could_ implement all the goodies in vhost-scsi (such
as reservations or ALUA), it just
Il 10/09/2012 08:47, Michael S. Tsirkin ha scritto:
On Mon, Sep 10, 2012 at 08:38:09AM +0200, Paolo Bonzini wrote:
Il 10/09/2012 08:03, Michael S. Tsirkin ha scritto:
On Mon, Sep 10, 2012 at 07:50:13AM +0200, Paolo Bonzini wrote:
Il 09/09/2012 00:22, Michael S. Tsirkin ha scritto:
Almost.
On 09/10/2012 06:29 AM, Hao, Xudong wrote:
Doesn't help. We can have:
host: deactivate fpu for some reason
guest: set cr4.osxsave, xcr0.bit3
host: enter guest with deactivated fpu
guest: touch fpu
result: host fpu corrupted.
Avi, I'm not sure if I fully understand you. Do you
On 09/09/2012 06:10 PM, Liu, Jinsong wrote:
Avi Kivity wrote:
On 09/09/2012 05:54 PM, Liu, Jinsong wrote:
hrtimers is an intrusive feature, I don't think we should
force-enable it. Please change it to a depends on.
Hmm, if it changed as
config KVM
	depends on HIGH_RES_TIMERS
The
On 09/10/2012 04:26 AM, Asias He wrote:
Or you can
make the guest talk to an internal unix-domain socket, tunnel that
through virtio-serial, terminate virtio-serial in lkvm, and direct it
towards the local X socket.
Doesn't this require some user agent or config modification to the guest?
On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
We can not directly call kvm_release_pfn_clean to release the pfn
since we can meet noslot pfn which is used to cache mmio info into
spte
Introduce mmu_release_pfn_clean to do this kind of thing
Signed-off-by: Xiao Guangrong
On 09/07/2012 09:15 AM, Xiao Guangrong wrote:
Checking the return of kvm_mmu_get_page is unnecessary since it is
guaranteed by memory cache
Thanks, applied.
--
error compiling committee.c: too many arguments to function
--
To unsubscribe from this list: send the line unsubscribe kvm in
On 09/07/2012 09:14 AM, Xiao Guangrong wrote:
This bug was triggered:
[ 4220.198458] BUG: unable to handle kernel paging request at fffe
[ 4220.203907] IP: [81104d85] put_page+0xf/0x34
..
[ 4220.237326] Call Trace:
[ 4220.237361] [a03830d0]
On 09/10/2012 04:22 PM, Avi Kivity wrote:
On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
We can not directly call kvm_release_pfn_clean to release the pfn
since we can meet noslot pfn which is used to cache mmio info into
spte
Introduce mmu_release_pfn_clean to do this kind of thing
On 09/10/2012 11:37 AM, Xiao Guangrong wrote:
On 09/10/2012 04:22 PM, Avi Kivity wrote:
On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
We can not directly call kvm_release_pfn_clean to release the pfn
since we can meet noslot pfn which is used to cache mmio info into
spte
Introduce
On Tue, Jun 26, 2012 at 01:32:58PM -0700, Frank Swiderski wrote:
This implementation of a virtio balloon driver uses the page cache to
store pages that have been released to the host. The communication
(outside of target counts) is one way--the guest notifies the host when
it adds a page to
On 09/07/2012 09:16 AM, Xiao Guangrong wrote:
mmu_notifier is the interface to broadcast the mm events to KVM, the
tracepoints introduced in this patch can trace all these events, it is
very helpful for us to notice and fix the bug caused by mm
There is nothing kvm specific here. Perhaps this
On 09/10/2012 05:02 PM, Avi Kivity wrote:
On 09/10/2012 11:37 AM, Xiao Guangrong wrote:
On 09/10/2012 04:22 PM, Avi Kivity wrote:
On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
We can not directly call kvm_release_pfn_clean to release the pfn
since we can meet noslot pfn which is used to cache
On 08/16/2012 04:57 PM, Avi Kivity wrote:
Hi Avi,
No, there was no reason and we disabled it there too. Interestingly, the
buffer
size did not go down significantly, even when manually flushing the pages
using /proc/sys/vm/drop_caches (3), the buffer size did not go down.
Finally,
after
On 2012-09-09 17:45, Avi Kivity wrote:
On 09/07/2012 11:50 AM, Jan Kiszka wrote:
+} else {
+    cpu_physical_memory_rw(run->mmio.phys_addr,
+                           run->mmio.data,
+                           run->mmio.len,
+
On 09/10/2012 05:09 PM, Avi Kivity wrote:
On 09/07/2012 09:16 AM, Xiao Guangrong wrote:
mmu_notifier is the interface to broadcast the mm events to KVM, the
tracepoints introduced in this patch can trace all these events, it is
very helpful for us to notice and fix the bug caused by mm
On 2012-09-09 16:13, Avi Kivity wrote:
On 09/06/2012 11:44 AM, Jan Kiszka wrote:
On 2012-08-30 20:30, Jan Kiszka wrote:
This adds PCI device assignment for i386 targets using the classic KVM
interfaces. This version is 100% identical to what is being maintained
in qemu-kvm for several years
On 2012-09-09 16:01, Avi Kivity wrote:
On 08/20/2012 11:55 AM, Jan Kiszka wrote:
No need to expose the fd-based interface, everyone will already be fine
with the more handy EventNotifier variant. Rename the latter to clarify
that we are still talking about irqfds here.
Thanks, applied.
qemu-kvm-1.1.2 is now available. This release is based on the upstream
qemu 1.1.2, plus kvm-specific enhancements. Please see the
original QEMU 1.1.2 release announcement [1] for details.
This release can be used with the kvm kernel modules provided by your
distribution kernel, or by the modules
qemu-kvm-1.2.0 is now available. This release is based on the upstream
qemu 1.2.0, plus kvm-specific enhancements. Please see the
original QEMU 1.2.0 release announcement [1] for details.
This release can be used with the kvm kernel modules provided by your
distribution kernel, or by the modules
On 2012-09-10 11:53, Avi Kivity wrote:
qemu-kvm-1.2.0 is now available. This release is based on the upstream
qemu 1.2.0, plus kvm-specific enhancements. Please see the
original QEMU 1.2.0 release announcement [1] for details.
To be more precise about the kvm-specific enhancements: The only
Hi,
I'm recently debugging a qemu-kvm issue. I add some print code like
'fprintf(stderr, ...)', however I fail to see any info at stdio. Anyone can
tell me where is qemu-kvm logfile, or, what I need do to record my fprintf info?
Thanks,
Jinsong
On 09/10/2012 02:33 PM, Michael S. Tsirkin wrote:
On Mon, Sep 10, 2012 at 09:27:38AM +0300, Michael S. Tsirkin wrote:
On Mon, Sep 10, 2012 at 09:16:29AM +0300, Michael S. Tsirkin wrote:
On Mon, Sep 10, 2012 at 11:42:25AM +0930, Rusty Russell wrote:
OK, I read the spec (pasted below for ease
On 09/06/12 16:58, Avi Kivity wrote:
On 08/22/2012 06:06 PM, Peter Lieven wrote:
Hi,
has anyone ever tested to run memtest with -cpu host flag passed to
qemu-kvm?
For me it resets when probing the chipset. With -cpu qemu64 it works
just fine.
Maybe this is specific to memtest, but it might be
Il 10/09/2012 13:06, Peter Lieven ha scritto:
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_entry: vcpu 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_exit: reason MSR_READ rip
0x11478 info 0 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_msr: msr_read 194 = 0x0 (#GP)
qemu-kvm-1.0.1-5107
Hi Jan,
On 2012/09/07 17:26, Jan Kiszka wrote:
On 2012-09-06 13:27, Tomoki Sekiyama wrote:
This RFC patch series provides facility to dedicate CPUs to KVM guests
and enable the guests to handle interrupts from passed-through PCI devices
directly (without VM exit and relay by the host).
On 09/10/12 13:29, Paolo Bonzini wrote:
Il 10/09/2012 13:06, Peter Lieven ha scritto:
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_entry: vcpu 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_exit: reason MSR_READ rip
0x11478 info 0 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_msr: msr_read 194
On 09/10/12 13:29, Paolo Bonzini wrote:
Il 10/09/2012 13:06, Peter Lieven ha scritto:
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_entry: vcpu 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_exit: reason MSR_READ rip
0x11478 info 0 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_msr: msr_read 194
On 09/10/2012 02:29 PM, Paolo Bonzini wrote:
Il 10/09/2012 13:06, Peter Lieven ha scritto:
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_entry: vcpu 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_exit: reason MSR_READ rip
0x11478 info 0 0
qemu-kvm-1.0.1-5107 [007] 410771.148000: kvm_msr:
Il 10/09/2012 13:52, Peter Lieven ha scritto:
dd if=/dev/cpu/0/msr skip=$((0x194)) bs=8 count=1 | xxd
dd if=/dev/cpu/0/msr skip=$((0xCE)) bs=8 count=1 | xxd
it only works without the skip. but the msr device returns all zeroes.
Hmm, the strange API of the MSR device doesn't work well with dd
On Mon, Sep 10, 2012 at 02:15:49PM +0200, Paolo Bonzini wrote:
Il 10/09/2012 13:52, Peter Lieven ha scritto:
dd if=/dev/cpu/0/msr skip=$((0x194)) bs=8 count=1 | xxd
dd if=/dev/cpu/0/msr skip=$((0xCE)) bs=8 count=1 | xxd
it only works without the skip. but the msr device returns all zeroes.
On 09/10/12 14:21, Gleb Natapov wrote:
On Mon, Sep 10, 2012 at 02:15:49PM +0200, Paolo Bonzini wrote:
Il 10/09/2012 13:52, Peter Lieven ha scritto:
dd if=/dev/cpu/0/msr skip=$((0x194)) bs=8 count=1 | xxd
dd if=/dev/cpu/0/msr skip=$((0xCE)) bs=8 count=1 | xxd
it only works without the skip.
On 09/10/2012 12:26 PM, Jan Kiszka wrote:
Is patch 4 the only one that is at v3, and the rest are to be taken from
the original posting?
That is correct.
Thanks, applied to uq/master, will push shortly.
Hi
Please send in any agenda items you are interested in covering.
Thanks, Juan.
On 09/10/2012 03:29 PM, Peter Lieven wrote:
On 09/10/12 14:21, Gleb Natapov wrote:
On Mon, Sep 10, 2012 at 02:15:49PM +0200, Paolo Bonzini wrote:
Il 10/09/2012 13:52, Peter Lieven ha scritto:
dd if=/dev/cpu/0/msr skip=$((0x194)) bs=8 count=1 | xxd
dd if=/dev/cpu/0/msr skip=$((0xCE)) bs=8
On 09/10/12 14:32, Avi Kivity wrote:
On 09/10/2012 03:29 PM, Peter Lieven wrote:
On 09/10/12 14:21, Gleb Natapov wrote:
On Mon, Sep 10, 2012 at 02:15:49PM +0200, Paolo Bonzini wrote:
Il 10/09/2012 13:52, Peter Lieven ha scritto:
dd if=/dev/cpu/0/msr skip=$((0x194)) bs=8 count=1 | xxd
dd
Most interrupts are delivered to only one vcpu. Use pre-built tables to
find interrupt destination instead of looping through all vcpus.
Signed-off-by: Gleb Natapov g...@redhat.com
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 64adb61..121f308 100644
---
On Sat, 2012-09-08 at 14:13 +0530, Srikar Dronamraju wrote:
signed-off-by: Andrew Theurer haban...@linux.vnet.ibm.com
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fbf1fd0..c767915 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4844,6 +4844,9 @@
On 09/10/2012 01:44 PM, Liu, Jinsong wrote:
Hi,
I'm recently debugging a qemu-kvm issue. I add some print code like
'fprintf(stderr, ...)', however I fail to see any info at stdio. Anyone can
tell me where is qemu-kvm logfile, or, what I need do to record my fprintf
info?
If you're
Avi Kivity wrote:
On 09/10/2012 01:44 PM, Liu, Jinsong wrote:
Hi,
I'm recently debugging a qemu-kvm issue. I add some print code like
'fprintf(stderr, ...)', however I fail to see any info at stdio.
Anyone can tell me where is qemu-kvm logfile, or, what I need do to
record my fprintf info?
On Mon, Sep 10, 2012 at 04:09:15PM +0300, Gleb Natapov wrote:
Most interrupts are delivered to only one vcpu. Use pre-built tables to
find interrupt destination instead of looping through all vcpus.
Signed-off-by: Gleb Natapov g...@redhat.com
Looks good overall. I think I see some bugs, with
On 09/08/2012 01:12 AM, Andrew Theurer wrote:
On Fri, 2012-09-07 at 23:36 +0530, Raghavendra K T wrote:
CCing PeterZ also.
On 09/07/2012 06:41 PM, Andrew Theurer wrote:
I have noticed recently that PLE/yield_to() is still not that scalable
for really large guests, sometimes even with no CPU
On 09/10/2012 04:09 PM, Gleb Natapov wrote:
Most interrupts are delivered to only one vcpu. Use pre-built tables to
find interrupt destination instead of looping through all vcpus.
+
+static inline int recalculate_apic_map(struct kvm *kvm)
+{
+ struct kvm_apic_map *new, *old = NULL;
+
Please pull from:
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master
to merge some kvm updates, most notably a port of qemu-kvm's pre-vfio device
assignment. With this there are no significant changes left between qemu and
qemu-kvm (though some work remains).
On 2012-09-10 17:25, Avi Kivity wrote:
Please pull from:
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master
to merge some kvm updates, most notably a port of qemu-kvm's pre-vfio device
assignment. With this there are no significant changes left between qemu and
qemu-kvm
Am 05.09.2012 22:46, schrieb Anthony Liguori:
What do we do when the FSF comes out with the GPLv4 and relicenses again
in an incompatible fashion? Do we do this exercise every couple of
years?
That's exactly why I suggested GPLv2+ because it was supposed to be a
preparation for the future.
Linus, please pull from the repo and branch at:
git://git.kernel.org/pub/scm/virt/kvm/kvm.git tags/kvm-3.6-2
to receive a trio of KVM fixes: incorrect lookup of guest cpuid,
an uninitialized variable fix, and error path cleanup fix.
Shortlog/diffstat follow.
On Mon, Sep 10, 2012 at 06:09:01PM +0300, Avi Kivity wrote:
On 09/10/2012 04:09 PM, Gleb Natapov wrote:
Most interrupts are delivered to only one vcpu. Use pre-built tables to
find interrupt destination instead of looping through all vcpus.
+
+static inline int
On Mon, Sep 10, 2012 at 05:32:39PM +0200, Jan Kiszka wrote:
On 2012-09-10 17:25, Avi Kivity wrote:
Please pull from:
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master
to merge some kvm updates, most notably a port of qemu-kvm's pre-vfio device
assignment. With this
Il 06/09/2012 07:02, Michael S. Tsirkin ha scritto:
It might be worth just unconditionally having a cache for the 2
descriptor case. This is what I get with qemu tap, though for some
reason the device features don't have guest or host CSUM, so my setup is
probably screwed:
Yes without
On 09/10/2012 06:49 PM, Marcelo Tosatti wrote:
On Mon, Sep 10, 2012 at 05:32:39PM +0200, Jan Kiszka wrote:
On 2012-09-10 17:25, Avi Kivity wrote:
Please pull from:
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master
to merge some kvm updates, most notably a port of
On Mon, 2012-09-10 at 08:16 -0500, Andrew Theurer wrote:
@@ -4856,8 +4859,6 @@ again:
	if (curr->sched_class != p->sched_class)
		goto out;
-	if (task_running(p_rq, p) || p->state)
-		goto out;
Is it possible that by this time the current thread takes
On Mon, Sep 10, 2012 at 10:47:15AM -0500, Thomas Lendacky wrote:
On Friday, September 07, 2012 09:19:04 AM Rusty Russell wrote:
Michael S. Tsirkin m...@redhat.com writes:
On Thu, Sep 06, 2012 at 05:27:23PM +0930, Rusty Russell wrote:
Michael S. Tsirkin m...@redhat.com writes:
On Friday, September 07, 2012 09:19:04 AM Rusty Russell wrote:
Michael S. Tsirkin m...@redhat.com writes:
On Thu, Sep 06, 2012 at 05:27:23PM +0930, Rusty Russell wrote:
Michael S. Tsirkin m...@redhat.com writes:
Yes without checksum net core always linearizes packets, so yes it is
On Mon, Sep 10, 2012 at 05:44:38PM +0300, Michael S. Tsirkin wrote:
On Mon, Sep 10, 2012 at 04:09:15PM +0300, Gleb Natapov wrote:
Most interrupts are delivered to only one vcpu. Use pre-built tables to
find interrupt destination instead of looping through all vcpus.
Signed-off-by: Gleb
* Peter Zijlstra pet...@infradead.org [2012-09-10 18:03:55]:
On Mon, 2012-09-10 at 08:16 -0500, Andrew Theurer wrote:
@@ -4856,8 +4859,6 @@ again:
	if (curr->sched_class != p->sched_class)
		goto out;
-	if (task_running(p_rq, p) || p->state)
-		goto
On Mon, Sep 10, 2012 at 07:17:54PM +0300, Gleb Natapov wrote:
+ return 0;
+}
+
+static inline int kvm_apic_set_id(struct kvm_lapic *apic, u8 id)
+{
+	apic_set_reg(apic, APIC_ID, id << 24);
+	return recalculate_apic_map(apic->vcpu->kvm);
+}
+
+static inline int
On Mon, 2012-09-10 at 22:26 +0530, Srikar Dronamraju wrote:
+static bool __yield_to_candidate(struct task_struct *curr, struct task_struct *p)
+{
+	if (!curr->sched_class->yield_to_task)
+		return false;
+
+	if (curr->sched_class != p->sched_class)
+
Hi,
i try to run virt-manager on a SLES 11 SP1 box. I'm using kernel 2.6.32.12 and
virt-manager 0.9.4-106.1.x86_64 .
The system is a 64bit box.
Here is the output:
=
pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/virt_manager/sles_11_sp1
# virt-manager
[1]
On Mon, Sep 03, 2012 at 06:35:24PM +0100, Michael Johns wrote:
Hi list,
I have been hacking the KVM-QEMU code, but need some help to be able
to perform a particular operation.
Currently, I perform some operations on the VM image after it has
received a shutdown call, and after the image
On Mon, Sep 10, 2012 at 5:05 AM, Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Jun 26, 2012 at 01:32:58PM -0700, Frank Swiderski wrote:
This implementation of a virtio balloon driver uses the page cache to
store pages that have been released to the host. The communication
(outside of
On 09/10/2012 07:51 AM, Alfred Bratterud wrote:
For a research project we are trying to boot a very large amount of tiny,
custom built VM's on KVM/ubuntu. The maximum VM-count achieved was 1000, but
with substantial slowness, and eventually kernel failure, while the
cpu/memory loads were
On 09/10/2012 01:37 PM, Mike Waychison wrote:
On Mon, Sep 10, 2012 at 5:05 AM, Michael S. Tsirkin m...@redhat.com wrote:
Also can you pls answer Avi's question?
How is overcommit managed?
Overcommit in our deployments is managed using memory cgroups on the
host. This allows us to have very
On Mon, Sep 10, 2012 at 2:04 PM, Rik van Riel r...@redhat.com wrote:
On 09/10/2012 01:37 PM, Mike Waychison wrote:
On Mon, Sep 10, 2012 at 5:05 AM, Michael S. Tsirkin m...@redhat.com
wrote:
Also can you pls answer Avi's question?
How is overcommit managed?
Overcommit in our deployments
On 09/09/2012 07:12 PM, Rusty Russell wrote:
OK, I read the spec (pasted below for ease of reading), but I'm still
confused over how this will work.
I thought normal net drivers have the hardware provide an rxhash for
each packet, and we map that to CPU to queue the packet on[1]. We hope
that
Hi everybody,
I got a server with CentOS 6.3 and KVM as a host and a windows 2k8
guest.
The windows machine's disk performance is very poor.
The windows guest uses VirtIO disk drivers, no cache and uses a LVM
partition on a Raid1.
atop shows 100% disk utilization as soon as the windows guest
On 09/10/2012 10:42 PM, Peter Zijlstra wrote:
On Mon, 2012-09-10 at 22:26 +0530, Srikar Dronamraju wrote:
+static bool __yield_to_candidate(struct task_struct *curr, struct task_struct *p)
+{
+	if (!curr->sched_class->yield_to_task)
+		return false;
+
+	if (curr->sched_class !=
On Mon, Sep 10, 2012 at 01:37:06PM -0400, Mike Waychison wrote:
On Mon, Sep 10, 2012 at 5:05 AM, Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Jun 26, 2012 at 01:32:58PM -0700, Frank Swiderski wrote:
This implementation of a virtio balloon driver uses the page cache to
store pages that
On Mon, 2012-09-10 at 19:12 +0200, Peter Zijlstra wrote:
On Mon, 2012-09-10 at 22:26 +0530, Srikar Dronamraju wrote:
+static bool __yield_to_candidate(struct task_struct *curr, struct task_struct *p)
+{
+	if (!curr->sched_class->yield_to_task)
+		return false;
+
On Mon, 2012-09-10 at 15:12 -0500, Andrew Theurer wrote:
+	/*
+	 * if the target task is not running, then only yield if the
+	 * current task is in guest mode
+	 */
+	if (!(p_rq->curr->flags & PF_VCPU))
+		goto out_irq;
This would make yield_to()
On 09/10/2012 04:19 PM, Peter Zijlstra wrote:
On Mon, 2012-09-10 at 15:12 -0500, Andrew Theurer wrote:
+	/*
+	 * if the target task is not running, then only yield if the
+	 * current task is in guest mode
+	 */
+	if (!(p_rq->curr->flags & PF_VCPU))
+
On Mon, Sep 10, 2012 at 3:59 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Mon, Sep 10, 2012 at 01:37:06PM -0400, Mike Waychison wrote:
On Mon, Sep 10, 2012 at 5:05 AM, Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Jun 26, 2012 at 01:32:58PM -0700, Frank Swiderski wrote:
This
On Mon, Sep 10, 2012 at 04:49:40PM -0400, Mike Waychison wrote:
On Mon, Sep 10, 2012 at 3:59 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Mon, Sep 10, 2012 at 01:37:06PM -0400, Mike Waychison wrote:
On Mon, Sep 10, 2012 at 5:05 AM, Michael S. Tsirkin m...@redhat.com
wrote:
On Tue,
On Fri, Sep 07, 2012 at 05:56:39PM +0800, Xiao Guangrong wrote:
On 09/06/2012 10:09 PM, Avi Kivity wrote:
On 08/22/2012 03:47 PM, Xiao Guangrong wrote:
On 08/22/2012 08:06 PM, Avi Kivity wrote:
On 08/21/2012 06:03 AM, Xiao Guangrong wrote:
Introduce write_readonly_mem in mmio-exit-info to
On Sun, 9 Sep 2012, Matthew Ogilvie wrote:
This bug manifested itself when the guest was Microport UNIX
System V/386 v2.1 (ca. 1987), because it would sometimes mask
off IRQ14 in the slave IMR after it had already been asserted.
The master would still try to deliver an interrupt even though
2012/9/11 Lentes, Bernd bernd.len...@helmholtz-muenchen.de
Hi,
i try to run virt-manager on a SLES 11 SP1 box. I'm using kernel 2.6.32.12
and virt-manager 0.9.4-106.1.x86_64 .
The system is a 64bit box.
Here is the output:
=
On Mon, Sep 10, 2012 at 11:25:38AM +0200, Jan Kiszka wrote:
On 2012-09-09 17:45, Avi Kivity wrote:
On 09/07/2012 11:50 AM, Jan Kiszka wrote:
+} else {
+    cpu_physical_memory_rw(run->mmio.phys_addr,
+                           run->mmio.data,
+
Hello Nicholas,
On 09/07/2012 02:48 PM, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hello Anthony Co,
This is the fourth installment to add host virtualized target support for
the mainline tcm_vhost fabric driver using Linux v3.6-rc into QEMU 1.3.0-rc.
On Tue, Sep 11, 2012 at 01:49:51AM +0100, Maciej W. Rozycki wrote:
On Sun, 9 Sep 2012, Matthew Ogilvie wrote:
This bug manifested itself when the guest was Microport UNIX
System V/386 v2.1 (ca. 1987), because it would sometimes mask
off IRQ14 in the slave IMR after it had already been