2011/11/30 Cam Macdonell c...@cs.ualberta.ca:
2011/11/30 Zang Hongyong zanghongy...@huawei.com:
Can this bug fix patch be applied yet?
Sorry for not replying yet. I'll test your patch within the next day.
Have you confirmed the proper receipt of interrupts in the receiving guests?
I can
On Thu, Dec 01, 2011 at 08:30:18PM +0200, Sasha Levin wrote:
Currently we silently fail if SVM is already in use by a different
virtualization technology.
This is bad since it's non-obvious to the user, and it's not too uncommon
for users to have several of these installed on the same host.
On 01.12.2011 14:33, Avi Kivity wrote:
Okay, I read the code and I even think I understood a little bit of it.
In general the patches look okay, I had only minor comments. But please
do document all the new interfaces.
Since when do we have API documentation? Tssj, kvm has grown up since last
On 02.12.2011 03:33, Alex,Shi wrote:
On Mon, 2011-11-21 at 17:00 +0800, Alex,Shi wrote:
On Thu, 2011-10-20 at 16:38 +0800, Eric Dumazet wrote:
Le jeudi 20 octobre 2011 à 15:32 +0800, Alex,Shi a écrit :
The percpu_xxx functions duplicate the this_cpu_xxx functions, so replace them
for further code
On 12/02/2011 01:20 AM, Raghavendra K T wrote:
+ struct kvm_mp_state mp_state;
+
+ mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
+ if (vcpu) {
+ vcpu->kicked = 1;
+ /* Ensure kicked is always set before wakeup */
+ barrier();
+ }
+ kvm_arch_vcpu_ioctl_set_mpstate(vcpu, &mp_state);
This must only be
On Thu, Dec 1, 2011 at 3:25 PM, Alex Williamson
alex.william...@redhat.com wrote:
On Thu, 2011-12-01 at 14:58 -0600, Stuart Yoder wrote:
One other mechanism we need as well is the ability to
enable/disable a domain.
For example-- suppose a device is assigned to a VM, the
device is in use
This was probably copypasted from the cr0 case, but it's unneeded here.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
arch/x86/kvm/emulate.c |3 ---
1 files changed, 0 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index ac8e5ed..f641201
freed_pages is never evaluated, so remove it, along with the return code that
kvm_mmu_remove_some_alloc_mmu_pages so far delivered to its only user.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
arch/x86/kvm/mmu.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git
-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
Behalf Of Stuart Yoder
Sent: Friday, December 02, 2011 8:11 PM
To: Alex Williamson
Cc: Alexey Kardashevskiy; aafab...@cisco.com; kvm@vger.kernel.org;
p...@au1.ibm.com; qemu-de...@nongnu.org;
On 12/02/2011 08:40 AM, Stuart Yoder wrote:
On Thu, Dec 1, 2011 at 3:25 PM, Alex Williamson
alex.william...@redhat.com wrote:
On Thu, 2011-12-01 at 14:58 -0600, Stuart Yoder wrote:
One other mechanism we need as well is the ability to
enable/disable a domain.
For example-- suppose a device
On 12/02/2011 12:11 PM, Bhushan Bharat-R65777 wrote:
How do we determine whether the guest is ready or not? Multiple
devices can become ready at different times.
The guest makes a hypercall with a device handle -- at least that's how
we do it in Topaz.
Further, if the guest has given the device
On 2011-12-02 07:26, Liu Ping Fan wrote:
From: Liu Ping Fan pingf...@linux.vnet.ibm.com
Currently, a vcpu can be destroyed only when the kvm instance is destroyed.
Change this so that a vcpu is destroyed when its refcnt drops to zero,
and then a vcpu CAN, and MUST, be destroyed before the kvm instance is.
I'm
-Original Message-
From: Wood Scott-B07421
Sent: Friday, December 02, 2011 11:57 PM
To: Bhushan Bharat-R65777
Cc: Stuart Yoder; Alex Williamson; Alexey Kardashevskiy;
aafab...@cisco.com; kvm@vger.kernel.org; p...@au1.ibm.com; qemu-
de...@nongnu.org; joerg.roe...@amd.com;
On 12/02/2011 12:45 PM, Bhushan Bharat-R65777 wrote:
Scott, I am not sure there is any real use case where a device needs to be
assigned beyond two levels (host + immediate guest) in nested virtualization.
Userspace drivers in the guest are a more likely scenario than nested
virtualization, at
Often when a guest is stopped from the qemu console, it will report spurious
soft lockup warnings on resume. There are kernel patches being discussed that
will give the host the ability to tell the guest that it is being stopped and
should ignore the soft lockup warning this generates.
KVM autotest relies on a system that listens to network traffic
and registers DHCP leases, so we know the IP for a given
guest network card. However, the terminology used in the
messages is highly specific to the internal implementation.
So let's use messages with terminology that can be more
Make the remote login code print messages only if a debug
flag is turned on. This way we can get rid of many lines that
clutter debug logs and are only really needed on special
occasions.
Signed-off-by: Lucas Meneghel Rodrigues l...@redhat.com
---
client/virt/virt_utils.py | 22
It's possible that the guest kernel dies during migration,
which will often lead to errors like:
Unhandled VMAddressVerificationError: Cannot verify MAC-IP address
mapping using arping: 9a:95:62:c5:0d:c0 --- 192.168.122.68
Since what in fact happened is that the guest died,
it's unable
On 2011-12-02 20:19, Eric B Munson wrote:
Often when a guest is stopped from the qemu console, it will report spurious
soft lockup warnings on resume. There are kernel patches being discussed that
will give the host the ability to tell the guest that it is being stopped and
should ignore the
On Fri, 02 Dec 2011, Jan Kiszka wrote:
On 2011-12-02 20:19, Eric B Munson wrote:
Often when a guest is stopped from the qemu console, it will report spurious
soft lockup warnings on resume. There are kernel patches being discussed
that
will give the host the ability to tell the guest
Hi All,
Scientific Linux 6.1 x64
qemu-kvm-0.12.1.2-2.160.el6_1.2.x86_64
My XP-Pro guest will only let me use two CPUs.
Is there a way I can tell Virt-Manager to use
one CPU with four cores instead of four separate
CPUs?
Many thanks,
-T
--
To unsubscribe from this list: send the line
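For the topology question above, one common approach (a sketch; exact syntax depends on the qemu-kvm and libvirt versions in use, so verify against your installed documentation) is to present the four vCPUs as one socket with four cores, which matters because XP Pro only licenses two sockets:

```sh
# qemu-kvm command line: 4 vCPUs exposed as 1 socket x 4 cores x 1 thread
qemu-kvm -smp 4,sockets=1,cores=4,threads=1 ...

# Equivalent libvirt domain XML (edit with `virsh edit <domain>`,
# which Virt-Manager will then pick up):
#   <cpu>
#     <topology sockets='1' cores='4' threads='1'/>
#   </cpu>
```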
Hi All,
Scientific Linux 6.1 x64
qemu-kvm-0.12.1.2-2.160.el6_1.2.x86_64
I do not have USB configured in any of my guests.
Since adding KVM to my host, my USB 2.0 ports on
my host are all acting like USB 1.0 ports (20 times
slower).
Is there a work around for this?
Many thanks,
-T
On Tue, 2011-11-29 at 14:31 +0200, Ohad Ben-Cohen wrote:
Virtio is using memory barriers to control the ordering of
references to the vrings on SMP systems. When the guest is compiled
with SMP support, virtio is only using SMP barriers in order to
avoid incurring the overhead involved with
On 12/2/2011 4:27 PM, Todd And Margo Chester wrote:
Hi All,
Scientific Linux 6.1 x64
qemu-kvm-0.12.1.2-2.160.el6_1.2.x86_64
My XP-Pro guest will only let me use two CPUs.
Is there a way I can tell Virt-Manager to use
one CPU with four cores instead of four separate
CPUs?
Don't know about
Hi All,
Scientific Linux 6.1 x64
qemu-kvm-0.12.1.2-2.160.el6_1.2.x86_64
My XP guest shuts down without issue. My Windows 7 Pro
guest always BSODs with "IRQL NOT LESS OR EQUAL".
Anyone know of a workaround for this?
Many thanks,
-T
Avi Kivity a...@redhat.com wrote:
That's true. But some applications do require low latency, and the
current code can impose a lot of time with the mmu spinlock held.
The total amount of work actually increases slightly, from O(N) to O(N
log N), but since the tree is so wide, the overhead
On Sat, Dec 3, 2011 at 1:09 AM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
Have you measured the impact of using normal barriers (non-SMP ones),
like we use in normal HW drivers, unconditionally?
I.e., if the difference is small enough I'd say just go for it and avoid
the bloat.
I