cpu-index, which uses a hyphen, is the better name.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
diff --git a/hmp-commands.hx b/hmp-commands.hx
index 5d4cb9e..e43ac7c 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -721,7 +721,7 @@ ETEXI
#if defined(TARGET_I386)
{
.name
When the argument cpu-index is not given,
the nmi command will inject an NMI on all CPUs.
This simulates the NMI button on a physical machine.
Note: it allows the nmi command without an argument and
changes the human monitor behavior.
Thanks to Markus Armbruster for correcting the logic
detecting
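The dispatch described above (inject on one CPU when cpu-index is given, on all CPUs when it is not) can be sketched as follows. This is an illustrative model, not QEMU's actual code; the VCpu struct and function name are made up for the example:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-CPU state; field names are illustrative, not QEMU's. */
typedef struct {
    int cpu_index;
    bool nmi_pending;
} VCpu;

/* Inject an NMI on the CPU with the given index, or on all CPUs when
 * cpu_index < 0 (the argument-less form of the nmi command). */
static void inject_nmi(VCpu *cpus, size_t n, int cpu_index)
{
    for (size_t i = 0; i < n; i++) {
        if (cpu_index < 0 || cpus[i].cpu_index == cpu_index) {
            cpus[i].nmi_pending = true;
        }
    }
}
```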
Make it possible to inject an NMI via the QEMU Monitor Protocol.
We use inject-nmi as the QMP command name, since its meaning is clearer.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
diff --git a/hmp-commands.hx b/hmp-commands.hx
index ec1a4db..e763bf9 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@
When cpu-index is found to be invalid at runtime, it reports
QERR_INVALID_PARAMETER_VALUE.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
diff --git a/monitor.c b/monitor.c
index 1b1c0ba..82935f0 100644
--- a/monitor.c
+++ b/monitor.c
@@ -2563,6 +2563,7 @@ static int do_inject_nmi(Monitor
Hi,
This is the eighth iteration of the nested VMX patch set. This iteration
solves a number of bugs and issues that bothered the reviewers. Some more
issues raised in the previous review remain open, but don't worry - I *am*
working to resolve all of them.
The biggest improvement in this
This patch adds a module option nested to vmx.c, which controls whether
the guest can use VMX instructions, i.e., whether we allow nested
virtualization. A similar, but separate, option already exists for the
SVM module.
This option currently defaults to 0, meaning that nested VMX must be
This patch allows a guest to use the VMXON and VMXOFF instructions, and
emulates them accordingly. Basically this amounts to checking some
prerequisites, and then remembering whether the guest has enabled or disabled
VMX operation.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
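The "check some prerequisites, then remember the state" approach described above can be sketched roughly as follows. This is a simplified assumption for illustration, not the actual KVM handler (the real code checks more conditions, e.g. the VMXON region pointer and the feature-control MSR):

```c
#include <stdbool.h>
#include <stdint.h>

#define X86_CR4_VMXE (1ULL << 13)

/* Illustrative per-vCPU state; field names are assumptions, not KVM's. */
struct vcpu_state {
    uint64_t cr4;
    int cpl;        /* current privilege level */
    bool vmxon;     /* has the guest entered VMX operation? */
};

/* Emulate VMXON: verify prerequisites, then just remember that the
 * guest is now in VMX operation. Returns 0 on success, -1 when the
 * instruction must fault instead. */
static int handle_vmxon(struct vcpu_state *v)
{
    if (!(v->cr4 & X86_CR4_VMXE))   /* CR4.VMXE must be set */
        return -1;
    if (v->cpl != 0)                /* only ring 0 may execute VMXON */
        return -1;
    v->vmxon = true;
    return 0;
}

/* Emulate VMXOFF: leaving VMX operation when not in it faults. */
static int handle_vmxoff(struct vcpu_state *v)
{
    if (!v->vmxon)
        return -1;
    v->vmxon = false;
    return 0;
}
```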
This patch allows the guest to enable the VMXE bit in CR4, which is a
prerequisite to running VMXON.
Whether to allow setting the VMXE bit now depends on the architecture (svm
or vmx), so its checking has moved to kvm_x86_ops->set_cr4(). This function
now returns an int: If kvm_x86_ops->set_cr4()
An implementation of VMX needs to define a VMCS structure. This structure
is kept in guest memory, but is opaque to the guest (who can only read or
write it with VMX instructions).
This patch starts to define the VMCS structure which our nested VMX
implementation will present to L1. We call it
When the guest can use VMX instructions (when the nested module option is
on), it should also be able to read and write VMX MSRs, e.g., to query about
VMX capabilities. This patch adds this support.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/include/asm/msr-index.h |9 ++
This patch includes a utility function for decoding pointer operands of VMX
instructions issued by L1 (a guest hypervisor)
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c | 59 +++
arch/x86/kvm/x86.c |3 +-
arch/x86/kvm/x86.h |
In this patch we add a list of L0 (hardware) VMCSs, which we'll use to hold a
hardware VMCS for each active vmcs12 (i.e., for each L2 guest).
We call each of these L0 VMCSs a vmcs02, as it is the VMCS that L0 uses
to run its nested guest L2.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
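The idea above, one hardware VMCS (vmcs02) per active vmcs12, can be sketched as a lookup keyed by the guest-physical address of the vmcs12. The structure and function names below are illustrative assumptions, not the patch's actual data structures:

```c
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for a real hardware VMCS region. */
struct vmcs { char data[64]; };

/* One pool entry: the L0 VMCS used to run the L2 guest whose
 * vmcs12 lives at the given guest-physical address. */
struct vmcs02_entry {
    uint64_t vmcs12_addr;
    struct vmcs *vmcs02;
    struct vmcs02_entry *next;
};

/* Find the vmcs02 for a given vmcs12, allocating one on first use. */
static struct vmcs *vmcs02_get(struct vmcs02_entry **head, uint64_t addr)
{
    for (struct vmcs02_entry *e = *head; e; e = e->next)
        if (e->vmcs12_addr == addr)
            return e->vmcs02;

    struct vmcs02_entry *e = calloc(1, sizeof(*e));
    if (!e)
        return NULL;
    e->vmcs02 = calloc(1, sizeof(*e->vmcs02));
    if (!e->vmcs02) {
        free(e);
        return NULL;
    }
    e->vmcs12_addr = addr;
    e->next = *head;
    *head = e;
    return e->vmcs02;
}
```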
In VMX, before we bring down a CPU we must VMCLEAR all VMCSs loaded on it
because (at least in theory) the processor might not have written all of its
content back to memory. Since a patch from June 26, 2008, this is done using
a per-cpu vcpus_on_cpu linked list of vcpus loaded on each CPU.
The
In this patch we add to vmcs12 (the VMCS that L1 keeps for L2) all the
standard VMCS fields. These fields are encapsulated in a struct vmcs_fields.
Later patches will enable L1 to read and write these fields using VMREAD/
VMWRITE, and they will be used during a VMLAUNCH/VMRESUME in preparing
VMX instructions specify success or failure by setting certain RFLAGS bits.
This patch contains common functions to do this, and they will be used in
the following patches which emulate the various VMX instructions.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/include/asm/vmx.h |
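The RFLAGS convention those common functions implement is architectural: VMsucceed clears CF, PF, AF, ZF, SF and OF; VMfailInvalid sets only CF; VMfailValid sets only ZF (and additionally stores an error number in the VM-instruction error field, not shown here). A minimal sketch, with made-up helper names:

```c
#include <stdint.h>

/* x86 RFLAGS bits involved in VMX instruction outcomes */
#define RFLAGS_CF (1UL << 0)
#define RFLAGS_PF (1UL << 2)
#define RFLAGS_AF (1UL << 4)
#define RFLAGS_ZF (1UL << 6)
#define RFLAGS_SF (1UL << 7)
#define RFLAGS_OF (1UL << 11)

#define VMX_FLAGS_MASK \
    (RFLAGS_CF | RFLAGS_PF | RFLAGS_AF | RFLAGS_ZF | RFLAGS_SF | RFLAGS_OF)

/* VMsucceed: clear all six arithmetic flags. */
static uint64_t vmx_succeed(uint64_t rflags)
{
    return rflags & ~VMX_FLAGS_MASK;
}

/* VMfailInvalid: set CF, clear the other five (no current VMCS). */
static uint64_t vmx_fail_invalid(uint64_t rflags)
{
    return (rflags & ~VMX_FLAGS_MASK) | RFLAGS_CF;
}

/* VMfailValid: set ZF, clear the others; the error number goes into
 * the current VMCS's VM-instruction error field (omitted here). */
static uint64_t vmx_fail_valid(uint64_t rflags)
{
    return (rflags & ~VMX_FLAGS_MASK) | RFLAGS_ZF;
}
```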
This patch implements the VMCLEAR instruction.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c | 63 ++-
1 file changed, 62 insertions(+), 1 deletion(-)
--- .before/arch/x86/kvm/vmx.c 2011-01-26 18:06:04.0 +0200
+++
This patch implements the VMPTRLD instruction.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c | 64 ++-
1 file changed, 63 insertions(+), 1 deletion(-)
--- .before/arch/x86/kvm/vmx.c 2011-01-26 18:06:04.0 +0200
+++
This patch implements the VMPTRST instruction.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c | 27 ++-
arch/x86/kvm/x86.c |3 ++-
arch/x86/kvm/x86.h |3 +++
3 files changed, 31 insertions(+), 2 deletions(-)
--- .before/arch/x86/kvm/x86.c
Implement the VMREAD and VMWRITE instructions. With these instructions, L1
can read and write to the VMCS it is holding. The values are read or written
to the fields of the vmcs_fields structure introduced in a previous patch.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c |
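Emulating VMREAD/VMWRITE against an in-memory struct amounts to mapping each VMCS field encoding to an offset in that struct. The sketch below shows the table-driven idea with just two (real) field encodings; the struct contents and helper names are illustrative, not the patch's actual code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Tiny stand-in for vmcs_fields; the real struct has dozens of fields. */
struct vmcs_fields {
    uint64_t guest_rip;
    uint64_t guest_rsp;
};

/* Map a VMCS field encoding to an offset into vmcs_fields.
 * 0x681E/0x681C are the architectural GUEST_RIP/GUEST_RSP encodings. */
static bool field_offset(uint32_t encoding, size_t *off)
{
    switch (encoding) {
    case 0x681E: *off = offsetof(struct vmcs_fields, guest_rip); return true;
    case 0x681C: *off = offsetof(struct vmcs_fields, guest_rsp); return true;
    default:     return false;  /* unknown field -> VMfail */
    }
}

static bool vmcs_read(const struct vmcs_fields *f, uint32_t enc, uint64_t *val)
{
    size_t off;
    if (!field_offset(enc, &off))
        return false;
    memcpy(val, (const char *)f + off, sizeof(*val));
    return true;
}

static bool vmcs_write(struct vmcs_fields *f, uint32_t enc, uint64_t val)
{
    size_t off;
    if (!field_offset(enc, &off))
        return false;
    memcpy((char *)f + off, &val, sizeof(val));
    return true;
}
```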
Move code that syncs dirty RSP and RIP registers back to the VMCS, into a
function. We will need to call this function from additional places in the
next patch.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c | 15 ++-
1 file changed, 10 insertions(+), 5
Implement the VMLAUNCH and VMRESUME instructions, allowing a guest
hypervisor to run its own guests.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c | 205 ++-
1 file changed, 202 insertions(+), 3 deletions(-)
---
Before nested VMX support, the exit handler for a guest executing a VMX
instruction (vmclear, vmlaunch, vmptrld, vmptrst, vmread, vmresume,
vmwrite, vmon, vmoff) was handle_vmx_insn(). This handler simply threw a #UD
exception. Now that all these exit reasons are properly handled (and
This patch implements nested_vmx_vmexit(), called when the nested L2 guest
exits and we want to run its L1 parent and let it handle this exit.
Note that this will not necessarily be called on every L2 exit. L0 may decide
to handle a particular exit on its own, without L1's involvement; In that
This patch contains the logic of whether an L2 exit should be handled by L0
and then L2 should be resumed, or whether L1 should be run to handle this
exit (using the nested_vmx_vmexit() function of the previous patch).
The basic idea is to let L1 handle the exit only if it actually asked to
trap
When KVM wants to inject an interrupt, the guest should think a real interrupt
has happened. Normally (in the non-nested case) this means checking that the
guest doesn't block interrupts (and if it does, inject when it doesn't - using
the interrupt window VMX mechanism), and setting up the
Similar to the previous patch, but concerning injection of exceptions rather
than external interrupts.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c | 26 ++
1 file changed, 26 insertions(+)
--- .before/arch/x86/kvm/vmx.c 2011-01-26
This patch adds correct handling of IDT_VECTORING_INFO_FIELD for the nested
case.
When a guest exits while handling an interrupt or exception, we get this
information in IDT_VECTORING_INFO_FIELD in the VMCS. When L2 exits to L1,
there's nothing we need to do, because L1 will see this field in
When L2 tries to modify CR0 or CR4 (with mov or clts), and modifies a bit
which L1 asked to shadow (via CR[04]_GUEST_HOST_MASK), we already do the right
thing: we let L1 handle the trap (see nested_vmx_exit_handled_cr() in a
previous patch).
When L2 modifies bits that L1 doesn't care about, we let
KVM's Lazy FPU loading means that sometimes L0 needs to set CR0.TS, even
if a guest didn't set it. Moreover, L0 must also trap CR0.TS changes and
NM exceptions, even if we have a guest hypervisor (L1) who didn't want these
traps. And of course, conversely: If L1 wanted to trap these events, we
In the unlikely case that L1 does not capture MSR_IA32_TSC, L0 needs to
emulate this MSR write by L2 by modifying vmcs02.tsc_offset.
We also need to set vmcs12.tsc_offset, for this change to survive the next
nested entry (see prepare_vmcs02()).
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
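The arithmetic behind the fix above: since guest_tsc = host_tsc + tsc_offset, making L2 observe a written value means adjusting vmcs02.tsc_offset by the corresponding delta, and applying the same delta to vmcs12.tsc_offset so the change survives the next prepare_vmcs02(). A hedged sketch with illustrative names:

```c
#include <stdint.h>

/* Illustrative stand-ins for the two VMCS levels' TSC offsets. */
struct nested_tsc {
    int64_t vmcs02_tsc_offset;  /* what the hardware uses while running L2 */
    int64_t vmcs12_tsc_offset;  /* what L1 believes it programmed */
};

/* Emulate an L2 write to MSR_IA32_TSC when L1 does not trap it:
 * compute the offset delta that makes the guest see `wanted`, then
 * apply it to both vmcs02 (effective now) and vmcs12 (persists
 * across the next nested entry). */
static void emulate_l2_tsc_write(struct nested_tsc *t,
                                 uint64_t host_tsc, uint64_t wanted)
{
    int64_t delta = (int64_t)(wanted - host_tsc) - t->vmcs02_tsc_offset;
    t->vmcs02_tsc_offset += delta;
    t->vmcs12_tsc_offset += delta;
}
```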
If the nested module option is enabled, add the VMX CPU feature to the
list of CPU features KVM advertises with the KVM_GET_SUPPORTED_CPUID ioctl.
Qemu uses this ioctl, and intersects KVM's list with its own list of desired
cpu features (depending on the -cpu option given to qemu) to determine
Small corrections of KVM (spelling, etc.) not directly related to nested VMX.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
arch/x86/kvm/vmx.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- .before/arch/x86/kvm/vmx.c 2011-01-26 18:06:06.0 +0200
+++
This patch includes a brief introduction to the nested vmx feature in the
Documentation/kvm directory. The document also includes a copy of the
vmcs12 structure, as requested by Avi Kivity.
Signed-off-by: Nadav Har'El n...@il.ibm.com
---
Documentation/kvm/nested-vmx.txt | 241
On 01/26/2011 02:08 PM, Michael S. Tsirkin wrote:
I just mean that once you fault you map sptes and then you can use them
without exits. mmio will cause exits each time. Right?
The swapper scanning sptes, ksmd, khugepaged, and swapping can all
cause a page to be unmapped. Though it
On Thu, Jan 27, 2011 at 11:21:47AM +0200, Avi Kivity wrote:
On 01/26/2011 02:08 PM, Michael S. Tsirkin wrote:
I just mean that once you fault you map sptes and then you can use them
without exits. mmio will cause exits each time. Right?
The swapper scanning sptes, ksmd,
On 01/27/2011 11:26 AM, Michael S. Tsirkin wrote:
Right. That's why I say that sorting by size might not be optimal.
Maybe a cache ...
Why would it not be optimal?
If you have 16GB RAM in two slots and a few megabytes here and there
scattered in some slots, you have three orders of
On Thu, Jan 27, 2011 at 11:26:19AM +0200, Michael S. Tsirkin wrote:
On Thu, Jan 27, 2011 at 11:21:47AM +0200, Avi Kivity wrote:
On 01/26/2011 02:08 PM, Michael S. Tsirkin wrote:
I just mean that once you fault you map sptes and then you can use them
without exits. mmio will cause
On Thu, Jan 27, 2011 at 11:28:12AM +0200, Avi Kivity wrote:
On 01/27/2011 11:26 AM, Michael S. Tsirkin wrote:
Right. That's why I say that sorting by size might not be optimal.
Maybe a cache ...
Why would it not be optimal?
If you have 16GB RAM in two slots and a few megabytes
On 01/27/2011 11:29 AM, Michael S. Tsirkin wrote:
On Thu, Jan 27, 2011 at 11:28:12AM +0200, Avi Kivity wrote:
On 01/27/2011 11:26 AM, Michael S. Tsirkin wrote:
Right. That's why I say that sorting by size might not be optimal.
Maybe a cache ...
Why would it not be optimal?
On 01/26/2011 06:51 PM, Asdo wrote:
Some time ago in this list it was mentioned that old kernels pre-2.6.28
don't work well with KVM.
(in particular we have a machine with 2.6.24)
pre 2.6.27 kernels don't have mmu notifiers and thus don't handle
overcommit well. No idea if there's anything
Hi Alex,
On 26.01.2011 06:12, Alex Williamson wrote:
So while your initial results are promising, my guess is that you're
using card specific drivers and still need to consider some of the
harder problems with generic support for vga assignment. I hacked on
this for a bit trying to see if I
On 01/26/2011 05:45 PM, Glauber Costa wrote:
On Wed, 2011-01-26 at 17:17 +0200, Avi Kivity wrote:
On 01/26/2011 02:20 PM, Glauber Costa wrote:
On Wed, 2011-01-26 at 13:13 +0200, Avi Kivity wrote:
On 01/24/2011 08:06 PM, Glauber Costa wrote:
As a proof of concept to KVM -
On 01/26/2011 07:49 PM, Glauber Costa wrote:
If type becomes implied based on the MSR number, you'd get the best of
both worlds, no?
I do think advertising features in CPUID is nicer than writing to an MSR
and then checking for an ack in the memory region.
Fine. But back to the point,
On 01/26/2011 12:24 PM, Avi Kivity wrote:
On 01/23/2011 01:25 PM, Matteo Signorini wrote:
Hi,
I'm having some problems understanding the sysenter instruction.
As far as I know, in order to successfully call the sysenter
instruction,
MSR_IA32_SYSENTER_CS and MSR_IA32_SYSENTER_EIP registers
Introduce qemu_cpu_kick_self to send SIG_IPI to the calling VCPU
context. First user will be kvm.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c| 21 +
qemu-common.h |1 +
2 files changed, 22 insertions(+), 0 deletions(-)
diff --git a/cpus.c
We do not use the timeout, so drop its logic. As we always poll our
signals, we do not need to drop the global lock. Removing those calls
allows some further simplifications. Also fix the error processing of
sigpending while at it.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
Reviewed-by:
KVM requires reentering the kernel after I/O exits in order to complete
instruction emulation. Failing to do so will leave the kernel state
inconsistent. To ensure that we will get back ASAP, we issue a
self-signal that will cause KVM_RUN to return once the pending
operations are
Pure interface cosmetics: Ensure that only kvm core services (as
declared in kvm.h) start with kvm_. Prepend qemu_ to those that
violate this rule in cpus.c. Also rename the corresponding tcg functions
for the sake of consistency.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c |
Currently, we only configure and process MCE-related SIGBUS events if
CONFIG_IOTHREAD is enabled. The groundwork is laid, we just need to
factor out the required handler registration and system configuration.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
CC: Huang Ying ying.hu...@intel.com
CC:
The reset we issue on KVM_EXIT_SHUTDOWN implies that we should also
leave the VCPU loop. As we now check for exit_request which is set by
qemu_system_reset_request, this bug is no longer critical. Still it's an
unneeded extra turn.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
kvm-all.c |
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them. Plug it by blocking both timer related
signals also on !CONFIG_IOTHREAD and process those via signalfd.
Signed-off-by: Jan Kiszka
Align with qemu-kvm and prepare for IO exit fix: There is no need to run
kvm_arch_process_irqchip_events in the inner VCPU loop. Any state change
this service processes will first cause an exit from kvm_cpu_exec
anyway. And we will have to reenter the kernel on IO exits
unconditionally, something
No functional changes.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 97 ++-
1 files changed, 58 insertions(+), 39 deletions(-)
diff --git a/cpus.c b/cpus.c
index 559ec55..f4ec84e 100644
--- a/cpus.c
+++ b/cpus.c
@@
This second round of patches focus on issues in cpus.c, primarily signal
related. The highlights are
- Add missing KVM_RUN continuation after I/O exits
- Fix for timer signal race in KVM entry code under !CONFIG_IOTHREAD
(based on Stefan's findings)
- MCE signal processing under
A pending vmstop request is also a reason to leave the inner main loop.
So far we ignored it, so pending stop requests issued over VCPU threads
were simply lost.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
vl.c | 14 +-
1 files changed, 5 insertions(+), 9 deletions(-)
If some I/O operation ends up calling qemu_system_reset_request in VCPU
context, we record this and inform the io-thread, but we do not
terminate the VCPU loop. This can lead to fairly unexpected behavior if
the triggering reset operation is supposed to work synchronously.
Fix this for TCG (when
If we call qemu_cpu_kick more than once before the target was able to
process the signal, pthread_kill will fail, and qemu will abort. Prevent
this by avoiding the redundant signal.
This logic can be found in qemu-kvm as well.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpu-defs.h |
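The guard described above boils down to a per-CPU "kick pending" flag that suppresses a second pthread_kill before the target has consumed the first signal. A minimal model (names illustrative, signal delivery stubbed out as a counter):

```c
#include <stdbool.h>

/* Per-CPU kick state; signals_sent stands in for pthread_kill(..., SIG_IPI). */
typedef struct {
    bool thread_kicked;
    int signals_sent;
} CpuThread;

/* Kick the target thread, but only if no kick is already pending;
 * a redundant pthread_kill could otherwise fail and abort qemu. */
static void cpu_kick(CpuThread *cpu)
{
    if (cpu->thread_kicked)
        return;
    cpu->thread_kicked = true;
    cpu->signals_sent++;
}

/* Called by the target thread once it has processed the signal. */
static void cpu_signal_handled(CpuThread *cpu)
{
    cpu->thread_kicked = false;
}
```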
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
kvm-all.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kvm-all.c b/kvm-all.c
index 9976762..1a55a10 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -219,6 +219,7 @@ int kvm_init_vcpu(CPUState *env)
mmap_size =
Block SIG_IPI, unblock it during KVM_RUN, just like in io-thread mode.
It's unused so far, but this infrastructure will be required for
self-IPIs and to process SIGBUS plus, in KVM mode, SIGIO and SIGALRM. As
Windows doesn't support signal services, we need to provide a stub for
the init function.
Will be required for SIGBUS handling. For obvious reasons, this will
remain a nop on Windows hosts.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
Makefile.objs |2 +-
cpus.c| 117 +++--
2 files changed, 65 insertions(+), 54
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/cpus.c b/cpus.c
index ceb3a83..cd3f89b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -606,8 +606,8 @@ static void *kvm_cpu_thread_fn(void *arg)
Move {tcg,kvm}_init_ipi and block_io_signals to avoid prototypes, rename
the former two to clarify that they deal with more than SIG_IPI. No
functional changes - except for the tiny fixup of strerror usage.
The forward declaration of sigbus_handler is just temporary; it will
be moved in a
Provide arch-independent kvm_on_sigbus* stubs to remove the #ifdef'ery
from cpus.c. This patch also fixes --disable-kvm build by providing the
missing kvm_on_sigbus_vcpu kvm-stub.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
CC: Huang Ying ying.hu...@intel.com
CC: Alexander Graf ag...@suse.de
If there is any pending request that requires us to leave the inner loop
of main_loop, make sure we do this as soon as possible by enforcing
non-blocking I/O processing.
While at it, move variable definitions out of the inner loop to
improve readability.
Signed-off-by: Jan Kiszka
Improve the readability of the exit dispatcher by moving the static
return value of kvm_handle_io to its caller.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
kvm-all.c | 17 -
1 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/kvm-all.c b/kvm-all.c
index
On 01/20/2011 05:27 PM, Marcelo Tosatti wrote:
Before patch:
real    5m6.493s
user    3m57.847s
sys     9m7.115s

real    5m1.750s
user    4m0.109s
sys     9m10.192s

After patch:
real    5m0.140s
user    3m57.956s
sys     8m58.339s

real    4m56.314s
user    4m0.303s
On 01/26/2011 11:56 PM, Rik van Riel wrote:
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a
On 01/20/2011 06:11 PM, Marcelo Tosatti wrote:
On Tue, Jan 18, 2011 at 04:08:33AM -0800, Mehul Chadha wrote:
Hi,
I have been trying to get suspending and resuming done across kvm and qemu.
While resuming a suspended state in kvm, an error was generated saying
"unknown section kvmclock". I
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them. Plug it by blocking both timer related
signals also on !CONFIG_IOTHREAD and process those via signalfd.
Signed-off-by: Jan Kiszka
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them. Plug it by blocking both timer related
signals also on !CONFIG_IOTHREAD and process those via signalfd.
As this fix depends on real
Reported by Stefan Hajnoczi.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
Build regression of "Only read/write MSR_KVM_ASYNC_PF_EN if supported".
target-i386/kvm.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index
On Fri, Jan 14, 2011 at 05:01:37PM +0100, Christoph Hellwig wrote:
Wire up the virtio_driver config_changed method to get notified about
config changes raised by the host. For now we just re-read the device
size to support online resizing of devices, but once we add more
attributes that might
This patch parses the input filename in sd_create(), and enables us
to specify a target server when creating sheepdog images.
Signed-off-by: MORITA Kazutaka morita.kazut...@lab.ntt.co.jp
---
block/sheepdog.c | 17 ++---
1 files changed, 14 insertions(+), 3 deletions(-)
diff --git
On 01/27/2011 02:09 PM, Jan Kiszka wrote:
Provide arch-independent kvm_on_sigbus* stubs to remove the #ifdef'ery
from cpus.c. This patch also fixes --disable-kvm build by providing the
missing kvm_on_sigbus_vcpu kvm-stub.
Signed-off-by: Jan Kiszkajan.kis...@siemens.com
CC: Huang
On 01/27/11 02:55, Avi Kivity wrote:
On 01/26/2011 06:51 PM, Asdo wrote:
Some time ago in this list it was mentioned that old kernels pre-2.6.28
don't work well with KVM.
(in particular we have a machine with 2.6.24)
pre 2.6.27 kernels don't have mmu notifiers and thus don't handle
On Wed, 2011-01-26 at 17:17 +0200, Michael S. Tsirkin wrote:
I am seeing a similar problem, and am trying to fix that.
My current theory is that this is a variant of a receive livelock:
if the application isn't fast enough to process
incoming data, the guest net stack switches
from prequeue
On Thu, Jan 27, 2011 at 10:44:34AM -0800, Shirley Ma wrote:
On Wed, 2011-01-26 at 17:17 +0200, Michael S. Tsirkin wrote:
I am seeing a similar problem, and am trying to fix that.
My current theory is that this is a variant of a receive livelock:
if the application isn't fast enough to
On Thu, 2011-01-27 at 21:00 +0200, Michael S. Tsirkin wrote:
Interesting. In particular running vhost and the transmitting guest
on the same host would have the effect of slowing down TX.
Does it double the BW for you too?
Running vhost and TX guest on the same host seems not good enough to
On Thu, Jan 27, 2011 at 11:09:00AM -0800, Shirley Ma wrote:
On Thu, 2011-01-27 at 21:00 +0200, Michael S. Tsirkin wrote:
Interesting. In particular running vhost and the transmitting guest
on the same host would have the effect of slowing down TX.
Does it double the BW for you too?
On Thu, 2011-01-27 at 21:31 +0200, Michael S. Tsirkin wrote:
Well slowing down the guest does not sound hard - for example we can
request guest notifications, or send extra interrupts :)
A slightly more sophisticated thing to try is to
poll the vq a bit more aggressively.
For example if we
On Thu, Jan 27, 2011 at 11:45:47AM -0800, Shirley Ma wrote:
On Thu, 2011-01-27 at 21:31 +0200, Michael S. Tsirkin wrote:
Well slowing down the guest does not sound hard - for example we can
request guest notifications, or send extra interrupts :)
A slightly more sophisticated thing to try
On Thu, 2011-01-27 at 22:05 +0200, Michael S. Tsirkin wrote:
Interesting. Could this be a variant of the now famous bufferbloat
then?
I guess we could drop some packets if we see we are not keeping up.
For
example if we see that the ring is X% full, we could quickly
complete
Y%
From: Michael S. Tsirkin m...@redhat.com
Date: Thu, 27 Jan 2011 22:05:48 +0200
Interesting. Could this be a variant of the now famous bufferbloat then?
Sigh, bufferbloat is the new global warming... :-/
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message
On Thu, 2011-01-27 at 13:02 -0800, David Miller wrote:
Interesting. Could this be a variant of the now famous
bufferbloat then?
Sigh, bufferbloat is the new global warming... :-/
Yep, some places become colder, some other places become warmer; same as
BW results, sometimes faster,
On Thu, 27 Jan 2011 15:29:05 +0800
Huang Ying ying.hu...@intel.com wrote:
Hi, Andrew,
On Thu, 2011-01-20 at 23:50 +0800, Marcelo Tosatti wrote:
On Mon, Jan 17, 2011 at 08:47:39AM +0800, Huang Ying wrote:
Hi, Andrew,
On Sun, 2011-01-16 at 23:35 +0800, Avi Kivity wrote:
On
Turns out this particular file is not a unittest, although
it has a name that complies with the pattern the unittest
script looks for.
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
utils/unittest_suite.py |7 ++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git
On Thu, 2011-01-27 at 12:56 +0100, André Weidemann wrote:
Hi Alex,
On 26.01.2011 06:12, Alex Williamson wrote:
So while your initial results are promising, my guess is that you're
using card specific drivers and still need to consider some of the
harder problems with generic support
I personally would consider it cleaner to have clearly
defined wrappers instead of complicated flags in the caller.
The number of args to these functions is getting nutty - you'll
probably find that it is beneficial to inline these wrapper functions, if
the number of callsites is small.
Really
On Fri, 28 Jan 2011 01:57:11 +0100
Andi Kleen a...@firstfloor.org wrote:
I personally would consider it cleaner to have clearly
defined wrappers instead of complicated flags in the caller.
The number of args to these functions is getting nutty - you'll
probably find that it is beneficial
On Thu, Jan 27, 2011 at 4:42 AM, Minchan Kim minchan@gmail.com wrote:
[snip]
index 7b56473..2ac8549 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1660,6 +1660,9 @@ zonelist_scan:
unsigned long mark;
int ret;
+
On Fri, Jan 28, 2011 at 11:56 AM, Balbir Singh
bal...@linux.vnet.ibm.com wrote:
On Thu, Jan 27, 2011 at 4:42 AM, Minchan Kim minchan@gmail.com wrote:
[snip]
index 7b56473..2ac8549 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1660,6 +1660,9 @@ zonelist_scan:
* Christoph Lameter c...@linux.com [2011-01-26 10:57:37]:
Reviewed-by: Christoph Lameter c...@linux.com
Thanks for the review!
--
Three Cheers,
Balbir
* Christoph Lameter c...@linux.com [2011-01-26 10:56:56]:
Reviewed-by: Christoph Lameter c...@linux.com
Thanks for the review!
--
Three Cheers,
Balbir
* MinChan Kim minchan@gmail.com [2011-01-28 14:44:50]:
On Fri, Jan 28, 2011 at 11:56 AM, Balbir Singh
bal...@linux.vnet.ibm.com wrote:
On Thu, Jan 27, 2011 at 4:42 AM, Minchan Kim minchan@gmail.com wrote:
[snip]
index 7b56473..2ac8549 100644
--- a/mm/page_alloc.c
+++
https://bugzilla.kernel.org/show_bug.cgi?id=27052
--- Comment #16 from prochazka prochazka.nico...@gmail.com 2011-01-28
06:58:03 ---
And there it is:
Jan 28 01:28:18 bergson25412 rmap_remove: 88011ce3fff8 1-BUG
Jan 28 01:28:18 bergson25412 [ cut here ]
Jan 28
Introduce migrate_ft_trans_put_ready() which kicks the FT transaction
cycle. When ft_mode is on, migrate_fd_put_ready() would open
ft_trans_file and turn on event_tap. To end or cancel an FT transaction,
ft_mode and event_tap are turned off. migrate_ft_trans_get_ready() is
called to receive ack
The event-tap function is called only when it is on and requests were
sent from device emulators.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
block.c | 15 +++
1 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/block.c b/block.c
index ff2795b..e4df9b6
Hi,
This patch series is a revised version of Kemari for KVM, which
applied comments for the previous post. The current code is based on
qemu.git 0bfe006c5380c5f8a485a55ded3329fbbc224396.
The changes from v0.2.7 -> v0.2.8 are:
- fixed calling wrong cb in event-tap
- add missing qemu_aio_release
Currently FdMigrationState doesn't support read(), and this patch
introduces it to get response from the other side.
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
migration-tcp.c | 15 +++
migration.c | 13 +
migration.h |3 +++
3 files
Signed-off-by: Yoshiaki Tamura tamura.yoshi...@lab.ntt.co.jp
---
qemu-char.c |2 +-
qemu_socket.h |1 +
2 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/qemu-char.c b/qemu-char.c
index edc9ad6..737d347 100644
--- a/qemu-char.c
+++ b/qemu-char.c
@@ -2116,7 +2116,7 @@ static