On Sun, Jan 30, 2011, Avi Kivity wrote about Re: [PATCH 05/29] nVMX: Implement
reading and writing of VMX MSRs:
+	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+	case MSR_IA32_VMX_PINBASED_CTLS:
+		vmx_msr_low = CORE2_PINBASED_CTLS_MUST_BE_ONE;
+		vmx_msr_high =
On 01/31/2011 10:57 AM, Nadav Har'El wrote:
On Sun, Jan 30, 2011, Avi Kivity wrote about Re: [PATCH 05/29] nVMX: Implement
reading and writing of VMX MSRs:
+	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+	case MSR_IA32_VMX_PINBASED_CTLS:
+		vmx_msr_low =
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2011-01-31 08:58:53]:
On Fri, 28 Jan 2011 09:20:02 -0600 (CST)
Christoph Lameter c...@linux.com wrote:
On Fri, 28 Jan 2011, KAMEZAWA Hiroyuki wrote:
I see it as a tradeoff of when to check: add_to_page_cache or when we are
Hi,
On Sun, Jan 30, 2011, Avi Kivity wrote about Re: [PATCH 07/29] nVMX: Hold a
vmcs02 for each vmcs12:
+/*
+ * Allocate an L0 VMCS (vmcs02) for the current L1 VMCS (vmcs12), if one
+ * does not already exist. The allocation is done in L0 memory, so to avoid
+ * a denial-of-service attack by
On 01/31/2011 11:26 AM, Nadav Har'El wrote:
Hi,
On Sun, Jan 30, 2011, Avi Kivity wrote about Re: [PATCH 07/29] nVMX: Hold a vmcs02
for each vmcs12:
+/*
+ * Allocate an L0 VMCS (vmcs02) for the current L1 VMCS (vmcs12), if one
+ * does not already exist. The allocation is done in L0
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
If we call qemu_cpu_kick more than once before the target was able to
process the signal, pthread_kill will fail, and qemu will abort. Prevent
this by avoiding the redundant signal.
Doesn't fit with the manual page (or with the idea that signals are
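For concreteness, a minimal sketch of the guard the patch description implies
(function name hypothetical, field and helper names assumed from qemu's
cpus.c; the flag would have to be reset once the target thread has handled
the signal):

static void qemu_cpu_kick_once(CPUState *env)
{
        /* don't queue a second SIG_IPI while the first one is
         * still pending on the target thread */
        if (env->thread_kicked)
                return;
        env->thread_kicked = true;
        qemu_thread_signal(env->thread, SIG_IPI);
}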
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
If there is any pending request that requires us to leave the inner loop
of main_loop, make sure we do this as soon as possible by enforcing
non-blocking IO processing.
While at it, move variable definitions out of the inner loop to improve
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them. Plug it by blocking both timer related
signals also on !CONFIG_IOTHREAD and process
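The shape of the fix, as far as it can be inferred from the description
(the choice of SIGALRM/SIGIO and the qemu wrapper are assumptions): keep the
timer signals blocked on the vcpu thread and let KVM unblock them only for
the duration of KVM_RUN, so a signal arriving after the exit_request check
forces an immediate EINTR return instead of getting lost:

static void setup_vcpu_timer_signals(CPUState *env)
{
        sigset_t blocked, run_mask;

        /* keep the timer signals blocked while qemu code runs */
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGALRM);
        sigaddset(&blocked, SIGIO);
        pthread_sigmask(SIG_BLOCK, &blocked, NULL);

        /* ...but have the kernel unblock them atomically for the
         * duration of KVM_RUN (KVM_SET_SIGNAL_MASK), closing the
         * window between the exit_request check and vmentry */
        pthread_sigmask(SIG_BLOCK, NULL, &run_mask);
        sigdelset(&run_mask, SIGALRM);
        sigdelset(&run_mask, SIGIO);
        kvm_set_signal_mask(env, &run_mask);
}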
Please send in any agenda items you are interested in covering.
Thanks, Juan.
On Fri, Jan 28, 2011, Juerg Haefliger wrote about Re: [PATCH 0/29] nVMX:
Nested VMX, v8:
This branch doesn't even compile:
...
CC [M] drivers/staging/smbfs/dir.o
drivers/staging/smbfs/dir.c:286: error: static declaration of
I tried to compile this branch with the default .config (answering
On 01/27/2011 03:10 PM, Jan Kiszka wrote:
Align with qemu-kvm and prepare for IO exit fix: There is no need to run
kvm_arch_process_irqchip_events in the inner VCPU loop. Any state change
this service processes will first cause an exit from kvm_cpu_exec
anyway. And we will have to reenter the
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
This second round of patches focuses on issues in cpus.c, primarily signal
related. The highlights are
- Add missing KVM_RUN continuation after I/O exits
- Fix for timer signal race in KVM entry code under !CONFIG_IOTHREAD
(based on Stefan's
On Fri, 2011-01-28 at 14:52 -0500, Glauber Costa wrote:
+	u64 to = (get_kernel_ns() - vcpu->arch.this_time_out);
+	/*
+	 * using nanoseconds introduces noise, which accumulates easily
+	 * leading to big steal time values. We want,
On Fri, 2011-01-28 at 14:52 -0500, Glauber Costa wrote:
+	/*
+	 * using nanoseconds introduces noise, which accumulates easily
+	 * leading to big steal time values. We want, however, to keep the
+	 * interface nanosecond-based for future-proofness. The hypervisor may
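A sketch of what the comment is driving at (field names are assumptions, not
necessarily Glauber's actual code): truncate each sample to a coarser
granularity before accumulating, while the exported interface stays
nanosecond-based:

static void account_steal_sample(struct kvm_vcpu *vcpu)
{
        u64 delta = get_kernel_ns() - vcpu->arch.this_time_out;

        /* drop sub-microsecond jitter so per-sample noise cannot
         * accumulate into big bogus steal time values */
        do_div(delta, NSEC_PER_USEC);               /* now in usec */
        vcpu->arch.steal_time += delta * NSEC_PER_USEC; /* export ns */
}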
On 2011-01-31 10:44, Avi Kivity wrote:
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
If we call qemu_cpu_kick more than once before the target was able to
process the signal, pthread_kill will fail, and qemu will abort. Prevent
this by avoiding the redundant signal.
Doesn't fit with the manual
On 2011-01-31 10:52, Avi Kivity wrote:
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
If there is any pending request that requires us to leave the inner loop
of main_loop, make sure we do this as soon as possible by enforcing
non-blocking IO processing.
While at it, move variable definitions
On Fri, 2011-01-28 at 14:52 -0500, Glauber Costa wrote:
+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
+static DEFINE_PER_CPU(u64, cpu_steal_time);
+
+#ifndef CONFIG_64BIT
+static DEFINE_PER_CPU(seqcount_t, steal_time_seq);
+
+static inline void steal_time_write_begin(void)
+{
+
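The truncated helpers are presumably completed with the standard per-cpu
seqcount pattern; a sketch (not necessarily the patch text): on 32-bit, a
u64 store can tear, so readers retry while the sequence count is odd:

#ifndef CONFIG_64BIT
static inline void steal_time_write_begin(void)
{
        write_seqcount_begin(&__get_cpu_var(steal_time_seq));
}

static inline void steal_time_write_end(void)
{
        write_seqcount_end(&__get_cpu_var(steal_time_seq));
}
#else
/* on 64-bit, u64 loads/stores are atomic; no seqcount needed */
static inline void steal_time_write_begin(void) { }
static inline void steal_time_write_end(void) { }
#endif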
On Mon, 2011-01-31 at 12:25 +0100, Peter Zijlstra wrote:
On Fri, 2011-01-28 at 14:52 -0500, Glauber Costa wrote:
+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
+static DEFINE_PER_CPU(u64, cpu_steal_time);
+
+#ifndef CONFIG_64BIT
+static DEFINE_PER_CPU(seqcount_t, steal_time_seq);
+
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them. Plug it by blocking both timer related
signals
On 2011-01-31 11:08, Avi Kivity wrote:
On 01/27/2011 03:10 PM, Jan Kiszka wrote:
Align with qemu-kvm and prepare for IO exit fix: There is no need to run
kvm_arch_process_irqchip_events in the inner VCPU loop. Any state change
this service processes will first cause an exit from kvm_cpu_exec
On Sun, 30 Jan 2011 16:06:20 +0100
Alexander Graf ag...@suse.de wrote:
On 28.01.2011, at 21:10, Luiz Capitulino wrote:
Hi there,
GSoC 2011 has been announced[1]. As we were pretty successful last year,
I think we should participate again. I've already created a wiki page:
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+static struct sched_entity *__pick_second_entity(struct cfs_rq *cfs_rq)
+{
+	struct rb_node *left = cfs_rq->rb_leftmost;
+	struct rb_node *second;
+
+	if (!left)
+		return NULL;
+
+	second = rb_next(left);
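The hunk is cut off by the digest; presumably the helper finishes along
these lines (a sketch, not the actual patch text):

        if (!second)
                return NULL;    /* only one entity on the queue */

        return rb_entry(second, struct sched_entity, run_node);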
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+bool __sched yield_to(struct task_struct *p, bool preempt)
+{
+	struct task_struct *curr = current;
+	struct rq *rq, *p_rq;
+	unsigned long flags;
+	bool yielded = 0;
+
+	local_irq_save(flags);
+
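This function is also cut off; a sketch of how such a directed yield
plausibly continues, using helpers that exist in kernel/sched.c (the real
patch may differ):

        rq = this_rq();
        p_rq = task_rq(p);
        double_rq_lock(rq, p_rq);

        if (task_rq(p) == p_rq && curr->sched_class == p->sched_class &&
            curr->sched_class->yield_to_task)
                yielded = curr->sched_class->yield_to_task(p_rq, p, preempt);

        double_rq_unlock(rq, p_rq);
        local_irq_restore(flags);

        if (yielded)
                yield();        /* note: retakes rq->lock, see below */

        return yielded;
}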
On Wed, 2011-01-26 at 17:23 -0500, Rik van Riel wrote:
Export the symbols required for a race-free kvm_vcpu_on_spin.
Avi, you asked for an example of why I hated KVM as a module :-)
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/fork.c b/kernel/fork.c
index
On 2011-01-31 11:12, Avi Kivity wrote:
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
This second round of patches focuses on issues in cpus.c, primarily signal
related. The highlights are
- Add missing KVM_RUN continuation after I/O exits
- Fix for timer signal race in KVM entry code under
On Mon, Jan 31, 2011 at 11:27 AM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
On 2011-01-31 13:13, Stefan Hajnoczi wrote:
On Mon, Jan 31, 2011 at 11:27 AM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request
On 2011-01-31 12:36, Jan Kiszka wrote:
On 2011-01-31 11:08, Avi Kivity wrote:
On 01/27/2011 03:10 PM, Jan Kiszka wrote:
Align with qemu-kvm and prepare for IO exit fix: There is no need to run
kvm_arch_process_irqchip_events in the inner VCPU loop. Any state change
this service processes will
On 01/30/2011 06:38 AM, Sheng Yang wrote:
(Sorry, missed this mail...)
On Mon, Jan 17, 2011 at 02:29:44PM +0200, Avi Kivity wrote:
On 01/06/2011 12:19 PM, Sheng Yang wrote:
Then we can support the mask bit operation of assigned devices now.
+int
On 01/31/2011 01:19 PM, Jan Kiszka wrote:
On 2011-01-31 10:44, Avi Kivity wrote:
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
If we call qemu_cpu_kick more than once before the target was able to
process the signal, pthread_kill will fail, and qemu will abort. Prevent
this by avoiding the
On 01/31/2011 01:22 PM, Jan Kiszka wrote:
On 2011-01-31 10:52, Avi Kivity wrote:
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
If there is any pending request that requires us to leave the inner loop
of main_loop, make sure we do this as soon as possible by enforcing
non-blocking IO
On 01/31/2011 01:27 PM, Jan Kiszka wrote:
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them.
On 01/26/2011 11:05 AM, Sheng Yang wrote:
On Tuesday 25 January 2011 20:47:38 Avi Kivity wrote:
On 01/19/2011 10:21 AM, Sheng Yang wrote:
We already get the guest MMIO address for that in the exit
information. I've created a chain of handlers in qemu to handle it.
On 01/31/2011 01:51 PM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:23 -0500, Rik van Riel wrote:
Export the symbols required for a race-free kvm_vcpu_on_spin.
Avi, you asked for an example of why I hated KVM as a module :-)
Why do you dislike exports so much?
--
error compiling
On Mon, Jan 31, 2011 at 12:18 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-31 13:13, Stefan Hajnoczi wrote:
On Mon, Jan 31, 2011 at 11:27 AM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan
On Mon, 2011-01-31 at 15:26 +0200, Avi Kivity wrote:
On 01/31/2011 01:51 PM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:23 -0500, Rik van Riel wrote:
Export the symbols required for a race-free kvm_vcpu_on_spin.
Avi, you asked for an example of why I hated KVM as a module :-)
Why
On 01/31/2011 03:43 PM, Peter Zijlstra wrote:
On Mon, 2011-01-31 at 15:26 +0200, Avi Kivity wrote:
On 01/31/2011 01:51 PM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:23 -0500, Rik van Riel wrote:
Export the symbols required for a race-free kvm_vcpu_on_spin.
Avi, you asked
On Tue, Jan 25, 2011 at 07:36:02PM +0200, Avi Kivity wrote:
On 01/25/2011 07:12 PM, Marcelo Tosatti wrote:
Should be done by a call to kvm_mmu_page_set_gfn(). But I don't
understand how it could become inconsistent in the first place.
if (is_rmap_spte(*sptep)) {
/*
On 2011-01-31 14:22, Avi Kivity wrote:
On 01/31/2011 01:27 PM, Jan Kiszka wrote:
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals
On 2011-01-31 14:17, Avi Kivity wrote:
On 01/31/2011 01:22 PM, Jan Kiszka wrote:
On 2011-01-31 10:52, Avi Kivity wrote:
On 01/27/2011 03:09 PM, Jan Kiszka wrote:
If there is any pending request that requires us to leave the inner loop
of main_loop, make sure we do this as soon as possible
On 01/31/2011 06:47 AM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+static struct sched_entity *__pick_second_entity(struct cfs_rq *cfs_rq)
+{
+	struct rb_node *left = cfs_rq->rb_leftmost;
+	struct rb_node *second;
+
+	if (!left)
+
On 2011-01-31 14:04, Jan Kiszka wrote:
On 2011-01-31 12:36, Jan Kiszka wrote:
On 2011-01-31 11:08, Avi Kivity wrote:
On 01/27/2011 03:10 PM, Jan Kiszka wrote:
Align with qemu-kvm and prepare for IO exit fix: There is no need to run
kvm_arch_process_irqchip_events in the inner VCPU loop. Any
On 01/31/2011 04:31 PM, Jan Kiszka wrote:
And how would you be kicked out of the select() call if it is waiting
with a timeout? We only have a single thread here.
If we use signalfd() (either kernel provided or thread+pipe), we kick
out of select by select()ing it (though I don't see
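For the single-threaded case, the signalfd() route Avi mentions looks
roughly like this generic sketch (not qemu code): the signal becomes a
readable descriptor, so the same select() that waits for I/O also wakes up
on it:

#include <signal.h>
#include <sys/signalfd.h>

static int make_timer_signal_selectable(void)
{
        sigset_t mask;

        sigemptyset(&mask);
        sigaddset(&mask, SIGALRM);
        /* deliver via the fd instead of an async handler */
        sigprocmask(SIG_BLOCK, &mask, NULL);
        return signalfd(-1, &mask, SFD_NONBLOCK);
        /* caller adds the returned fd to the select() rfds set; a
         * pending SIGALRM then ends the select() timeout early */
}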
On Mon, Jan 31, 2011 at 04:40:34PM +0100, Jan Kiszka wrote:
On 2011-01-31 14:04, Jan Kiszka wrote:
On 2011-01-31 12:36, Jan Kiszka wrote:
On 2011-01-31 11:08, Avi Kivity wrote:
On 01/27/2011 03:10 PM, Jan Kiszka wrote:
Align with qemu-kvm and prepare for IO exit fix: There is no need to
On 2011-01-31 17:38, Gleb Natapov wrote:
On Mon, Jan 31, 2011 at 04:40:34PM +0100, Jan Kiszka wrote:
On 2011-01-31 14:04, Jan Kiszka wrote:
On 2011-01-31 12:36, Jan Kiszka wrote:
On 2011-01-31 11:08, Avi Kivity wrote:
On 01/27/2011 03:10 PM, Jan Kiszka wrote:
Align with qemu-kvm and prepare
On 2011-01-31 17:41, Jan Kiszka wrote:
On 2011-01-31 17:38, Gleb Natapov wrote:
On Mon, Jan 31, 2011 at 04:40:34PM +0100, Jan Kiszka wrote:
On 2011-01-31 14:04, Jan Kiszka wrote:
On 2011-01-31 12:36, Jan Kiszka wrote:
On 2011-01-31 11:08, Avi Kivity wrote:
On 01/27/2011 03:10 PM, Jan Kiszka
On Mon, Jan 31, 2011 at 05:41:24PM +0100, Jan Kiszka wrote:
On 2011-01-31 17:38, Gleb Natapov wrote:
On Mon, Jan 31, 2011 at 04:40:34PM +0100, Jan Kiszka wrote:
On 2011-01-31 14:04, Jan Kiszka wrote:
On 2011-01-31 12:36, Jan Kiszka wrote:
On 2011-01-31 11:08, Avi Kivity wrote:
On
On 2011-01-31 17:50, Gleb Natapov wrote:
On Mon, Jan 31, 2011 at 05:41:24PM +0100, Jan Kiszka wrote:
On 2011-01-31 17:38, Gleb Natapov wrote:
On Mon, Jan 31, 2011 at 04:40:34PM +0100, Jan Kiszka wrote:
On 2011-01-31 14:04, Jan Kiszka wrote:
On 2011-01-31 12:36, Jan Kiszka wrote:
On
On Mon, Jan 31, 2011 at 05:52:13PM +0100, Jan Kiszka wrote:
On 2011-01-31 17:50, Gleb Natapov wrote:
On Mon, Jan 31, 2011 at 05:41:24PM +0100, Jan Kiszka wrote:
On 2011-01-31 17:38, Gleb Natapov wrote:
On Mon, Jan 31, 2011 at 04:40:34PM +0100, Jan Kiszka wrote:
On 2011-01-31 14:04, Jan
On 2011-01-31 11:02, Juan Quintela wrote:
Please send in any agenda items you are interested in covering.
o KVM upstream merge: status, plans, coordination
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
On 01/31/2011 06:49 AM, Peter Zijlstra wrote:
On Wed, 2011-01-26 at 17:21 -0500, Rik van Riel wrote:
+	if (yielded)
+		yield();
+
+	return yielded;
+}
+EXPORT_SYMBOL_GPL(yield_to);
yield() will again acquire rq->lock.. why not simply have
->yield_to_task() do
On Mon, Jan 24, 2011 at 10:37:56PM -0700, Alex Williamson wrote:
On Mon, 2011-01-24 at 08:44 -0700, Alex Williamson wrote:
I'll look at how we might be
able to allocate slots on demand. Thanks,
Here's a first cut just to see if this looks agreeable. This allows the
slot array to grow on
On Fri, Jan 21, 2011 at 12:21:00AM -0500, john cooper wrote:
[Resubmit of prior version which contained a wayward
patch hunk. Thanks Marcelo]
A correction to Intel cpu model CPUID data (patch queued)
caused winxp to BSOD when booted with a Penryn model.
This was traced to the CPUID model
When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so vhost-net causes more context switches and
higher CPU utilization than userspace virtio which handles networking in
the same thread.
We'll need to fix this by adding level irq support in kvm irqfd,
On 01/31/2011 03:19 PM, Michael S. Tsirkin wrote:
When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so vhost-net causes more context switches and
higher CPU utilization than userspace virtio which handles networking in
the same thread.
We'll need
On 01/31/2011 12:10 PM, Jan Kiszka wrote:
On 2011-01-31 11:02, Juan Quintela wrote:
Please send in any agenda items you are interested in covering.
o KVM upstream merge: status, plans, coordination
o QMP support status for 0.14. Luiz and I already chatted about it
today
On Mon, 2011-01-31 at 23:19 +0200, Michael S. Tsirkin wrote:
When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so vhost-net causes more context switches and
higher CPU utilization than userspace virtio which handles networking in
the same thread.
Use the buddy mechanism to implement yield_task_fair. This
allows us to skip onto the next highest priority se at every
level in the CFS tree, unless doing so would introduce gross
unfairness in CPU time distribution.
We order the buddy selection in pick_next_entity to check
yield first, then
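Condensed into code, the description amounts to something like this sketch
(the 'skip' buddy field is an assumption about the patch's naming):

static void yield_task_fair(struct rq *rq)
{
        struct sched_entity *se = &rq->curr->se;

        if (unlikely(rq->nr_running == 1))
                return;                 /* nobody to yield to */

        /* mark the yielding entity at every level of the CFS tree so
         * pick_next_entity() prefers a sibling, unless doing so would
         * grossly violate fairness (vruntime spread) */
        for_each_sched_entity(se)
                cfs_rq_of(se)->skip = se;
}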
The clear_buddies function does not seem to play well with the concept
of hierarchical runqueues. In the following tree, task groups are
represented by 'G', tasks by 'T', next by 'n' and last by 'l'.
          (nl)
         /    \
    G(nl)      G
    /  \        \
 T(l)  T(n)      T
This situation can arise when a
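A hierarchy-aware clear then walks up the tree and drops every buddy pointer
that still refers to the entity being cleared; a sketch of the idea (close
to what mainline later did, not necessarily this patch's exact text):

static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        for_each_sched_entity(se) {
                struct cfs_rq *cfs_rq = cfs_rq_of(se);

                /* clear 'next'/'last' at every level that still
                 * points at us, so stale buddies can't be picked */
                if (cfs_rq->next == se)
                        cfs_rq->next = NULL;
                if (cfs_rq->last == se)
                        cfs_rq->last = NULL;
        }
}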
From: Mike Galbraith efa...@gmx.de
Currently only implemented for fair class tasks.
Add a yield_to_task() method to the fair scheduling class, allowing the
caller of yield_to() to accelerate another thread in its thread group /
task group.
Implemented via a scheduler hint, using cfs_rq->next to
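Filled out, the hook plausibly looks like this sketch (signature inferred
from the yield_to() caller quoted elsewhere in this digest):

static bool yield_to_task_fair(struct rq *rq, struct task_struct *p,
                               bool preempt)
{
        struct sched_entity *se = &p->se;

        if (!se->on_rq)
                return false;

        /* scheduler hint: make p the 'next' buddy at every level so
         * the next pick_next_entity() selects it */
        for_each_sched_entity(se)
                cfs_rq_of(se)->next = se;

        yield_task_fair(rq);    /* and have current give up its slice */
        return true;
}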
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a feature that detects when a virtual
CPU is
Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns of certain workloads, we instead use yield_to to get
another VCPU in the same KVM guest to run sooner.
This seems to give a 10-15% speedup in certain workloads.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by:
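In outline, the directed yield could look like the following sketch (it
assumes the vcpu->task tracking introduced elsewhere in this series;
kvm_for_each_vcpu() is the existing iterator):

void kvm_vcpu_on_spin(struct kvm_vcpu *me)
{
        struct kvm *kvm = me->kvm;
        struct kvm_vcpu *vcpu;
        int i;

        /* instead of sleeping, hand our timeslice to a runnable
         * sibling vcpu -- the lock holder is likely among them */
        kvm_for_each_vcpu(i, vcpu, kvm) {
                struct task_struct *task = ACCESS_ONCE(vcpu->task);

                if (vcpu == me || !task)
                        continue;
                if (yield_to(task, 1))
                        break;
        }
}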
Keep track of which task is running a KVM vcpu. This helps us
figure out later what task to wake up if we want to boost a
vcpu that got preempted.
Unfortunately there are no guarantees that the same task
always keeps the same vcpu, so we can only track the task
across a single run of the vcpu.
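The tracking itself can be as small as bracketing the run loop; a sketch,
with the 'task' field being the addition described above:

        /* in kvm_vcpu_ioctl(KVM_RUN): valid only across this single
         * run, since userspace may run the vcpu from another task
         * next time */
        vcpu->task = current;
        r = kvm_arch_vcpu_ioctl_run(vcpu, vcpu->run);
        vcpu->task = NULL;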
With CONFIG_FAIR_GROUP_SCHED, each task_group has its own cfs_rq.
Yielding to a task from another cfs_rq may be worthwhile, since
a process calling yield typically cannot use the CPU right now.
Therefore, we want to check the per-cpu nr_running, not the
cgroup local one.
Signed-off-by: Rik van
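The distinction matters in a single check on the yield path; a sketch of
what the text argues for:

        /* rq->nr_running counts every runnable task on this CPU,
         * while the cgroup-local cfs_rq->nr_running sees only our
         * own task group and would make yield a no-op too often */
        if (rq->nr_running == 1)
                return;         /* truly nothing else to run here */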
Export the symbols required for a race-free kvm_vcpu_on_spin.
Signed-off-by: Rik van Riel r...@redhat.com
diff --git a/kernel/fork.c b/kernel/fork.c
index 3b159c5..adc8f47 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -191,6 +191,7 @@ void __put_task_struct(struct task_struct *tsk)
On Mon, Jan 31, 2011 at 02:47:34PM -0700, Alex Williamson wrote:
On Mon, 2011-01-31 at 23:19 +0200, Michael S. Tsirkin wrote:
When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so vhost-net causes more context switches and
higher CPU utilization
On Tue, 2011-02-01 at 00:02 +0200, Michael S. Tsirkin wrote:
On Mon, Jan 31, 2011 at 02:47:34PM -0700, Alex Williamson wrote:
On Mon, 2011-01-31 at 23:19 +0200, Michael S. Tsirkin wrote:
When MSI is off, each interrupt needs to be bounced through the io
thread when it's set/cleared, so
On Mon, Jan 31, 2011 at 03:07:49PM -0700, Alex Williamson wrote:
On Tue, 2011-02-01 at 00:02 +0200, Michael S. Tsirkin wrote:
On Mon, Jan 31, 2011 at 02:47:34PM -0700, Alex Williamson wrote:
On Mon, 2011-01-31 at 23:19 +0200, Michael S. Tsirkin wrote:
When MSI is off, each interrupt
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the queue is full again.
Maybe the following will help it stabilize?
By
On Mon, 2011-01-31 at 18:24 -0600, Steve Dobbelstein wrote:
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the queue
On Mon, Jan 31, 2011 at 03:09:09PM +0200, Avi Kivity wrote:
On 01/30/2011 06:38 AM, Sheng Yang wrote:
(Sorry, missed this mail...)
On Mon, Jan 17, 2011 at 02:29:44PM +0200, Avi Kivity wrote:
On 01/06/2011 12:19 PM, Sheng Yang wrote:
Then we can support the mask bit operation of assigned
On Mon, Jan 31, 2011 at 03:24:27PM +0200, Avi Kivity wrote:
On 01/26/2011 11:05 AM, Sheng Yang wrote:
On Tuesday 25 January 2011 20:47:38 Avi Kivity wrote:
On 01/19/2011 10:21 AM, Sheng Yang wrote:
We already get the guest MMIO address for that in the exit
information.
On Mon, Jan 31, 2011 at 06:24:34PM -0600, Steve Dobbelstein wrote:
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one request and interrupt the guest,
then it adds one request and the
On Mon, Jan 31, 2011 at 05:30:38PM -0800, Sridhar Samudrala wrote:
On Mon, 2011-01-31 at 18:24 -0600, Steve Dobbelstein wrote:
Michael S. Tsirkin m...@redhat.com wrote on 01/28/2011 06:16:16 AM:
OK, so thinking about it more, maybe the issue is this:
tx becomes full. We process one