From: Avi Kivity a...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
diff --git a/qemu/pc-bios/bios.bin b/qemu/pc-bios/bios.bin
index 768d8f0..0e0dbea 100644
Binary files a/qemu/pc-bios/bios.bin and b/qemu/pc-bios/bios.bin differ
From: Avi Kivity a...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
diff --git a/configure b/configure
index d2883a7..493c178 100755
--- a/configure
+++ b/configure
@@ -125,6 +125,9 @@ if [ -n $no_uname ]; then
elif [ -e $kerneldir/include/config/kernel.release ]; then
From: Marcelo Tosatti mtosa...@redhat.com
kvm_load_registers is a general interface to load registers, and is
used by vmport, gdbstub, etc. The TSC MSR is continually counting, so
it can't simply be read and written back like the other registers/MSRs
(doing so overwrites the current count).
From: Marcelo Tosatti mtosa...@redhat.com
The TSC is zeroed at RESET, and not at SMP initialization.
This prevents the TSCs from going out of sync between vcpus on SMP
guests.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
diff --git
From: Avi Kivity a...@redhat.com
Using a for_each loop style removes the need to write callbacks and
nasty casts.
Implement the walk_shadow() using the for_each_shadow_entry().
Signed-off-by: Avi Kivity a...@redhat.com
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b86df6..3248a3e
From: Avi Kivity a...@redhat.com
Eliminating a callback and a useless structure.
Signed-off-by: Avi Kivity a...@redhat.com
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3248a3e..b4b79b0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1841,67 +1841,42 @@ static void
From: Avi Kivity a...@redhat.com
Effectively reverting to the pre walk_shadow() version -- but now
with the reusable for_each().
Signed-off-by: Avi Kivity a...@redhat.com
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 9fd78b6..69c7e33 100644
---
From: Avi Kivity a...@redhat.com
No longer used.
Signed-off-by: Avi Kivity a...@redhat.com
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b4b79b0..31ebe69 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -145,11 +145,6 @@ struct kvm_rmap_desc {
struct
From: Avi Kivity a...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 69c7e33..46b68f9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -25,7 +25,6 @@
#if PTTYPE == 64
#define
From: Marcelo Tosatti mtosa...@redhat.com
VMX initializes the TSC offset for each vcpu at different times, and
also reinitializes it for vcpus other than 0 on APIC SIPI message.
This bug causes the TSCs to appear unsynchronized in the guest, even if
the host is good.
Older Linux kernels don't
Marcelo Tosatti wrote:
On Thu, Dec 25, 2008 at 03:23:34PM +0200, Avi Kivity wrote:
This patchset replaces walk_shadow(), which calls a callback for each
shadow pte that maps a guest virtual address, with an equivalent for_each
style construct. The benefits are fewer thunks and smaller code.
Please
Hi All,
This is our Weekly KVM Testing Report against the latest kvm.git
4f27e3e32b71dd1beff3ace80cce5fb48887a8be and kvm-userspace.git
6b3523a08a01c1411b33690d66cf29920b0d09ce.
The SMP Vista booting issue is fixed. A 32e guest was found to crash
during Live Migration.
One new issue:
Marcelo Tosatti wrote:
Ok, this could cause the guest tsc to be initialized to a high value
close to wraparound (if the vcpu is migrated to a cpu with a negative
TSC difference before vmx_vcpu_setup).
Applied, thanks.
--
error compiling committee.c: too many arguments to function
Marcelo Tosatti wrote:
Most Intel hosts are supposed to have their TSCs synchronized. This
patchset attempts to fix the sites which overwrite the TSC making them
appear unsynchronized to the guest.
Applied the userspace bits as well. I dropped qemu-kvm-x86.h; these
files are going away,
The changelog from kvm-81 to kvm-82 says:
- much improved guest debugging (Jan Kiszka)
- both debugger in guest and debugger in host
I haven't tested it much, but I can confirm that debugging tricks I knew
didn't work with kvm-81 and before now work fine. Malware which wouldn't
even run now
On Sat, Dec 27, 2008 at 05:27:25PM -0200, Marcelo Tosatti wrote:
+static int kvm_vm_ioctl_request_gsi_msg(struct kvm *kvm,
+					struct kvm_assigned_gsi_msg *agsi_msg)
+{
+	struct kvm_gsi_msg gsi_msg;
+	int r;
+
On Sat, Dec 27, 2008 at 06:06:26PM -0200, Marcelo Tosatti wrote:
On Fri, Dec 26, 2008 at 10:30:07AM +0800, Sheng Yang wrote:
Thanks to Marcelo's observation, the following code has a potential issue:
if (cancel_work_sync(&assigned_dev->interrupt_work))
	kvm_put_kvm(kvm);
In fact,
Sheng Yang wrote:
if (cancel_work_sync(&assigned_dev->interrupt_work))
	kvm_put_kvm(kvm);
In fact, cancel_work_sync() returns true both when the work struct is
merely scheduled and when its callback has already executed. This code
only considers the former situation.
Why not simply drop
Hello List,
I was planning to introduce kvm in our server farm to give customers the
ability to boot their OS inside kvm in case they did something wrong and
want to fix it.
We have several hundred servers with different Linux distros, but always
the same hardware and kernel. Our hard drives are
On 12/27/08, Lennert Buytenhek buyt...@wantstofly.org wrote:
On Sat, Dec 27, 2008 at 07:52:39PM +, Frederik Himpe wrote:
Mandriva is now using the -Werror=format-security CFLAG by default. To
make kvm 82 compile with this option, I had to apply this patch:
(restoring cc list)
Andi Kleen wrote:
One of the other problems: NMIs and MCEs have the same problem with SYSCALL
This one however looks unsolvable. Userspace can point %rsp into
arbitrary memory, issue a syscall, and hope for an nmi. Since we're in
cpl 0 and are not using IST, the
On Tue, Nov 25, 2008 at 01:52:59PM +0100, Andi Kleen wrote:
But yeah - the remapping of HPET timers to virtual HPET timers sounds
pretty tough. I wonder if one could overcome that with a little
hardware support though ...
For gettimeofday better make TSC work. Even in the best case
On Sun, Dec 28, 2008 at 04:09:26PM +0200, Avi Kivity wrote:
I don't see how syscall could work on i386, and indeed:
i386 has task gates which support unconditional stack switching. But there
are no 64bit task gates, just ISTs.
BTW I think there are more similar problems in your patch too.
Andi Kleen wrote:
On Sun, Dec 28, 2008 at 04:09:26PM +0200, Avi Kivity wrote:
I don't see how syscall could work on i386, and indeed:
i386 has task gates which support unconditional stack switching. But there
are no 64bit task gates, just ISTs.
i386 is not that interesting to
One fatal problem is enough -- I don't think that patch can be made to
work. Pity since it did clean up some stuff.
Not sure that was true anyways.
I would like however to speed up kvm. Here's a plan:
1. Add per-cpu IDT
You don't need that, do you? Just two sets.
2. When switching
Avi Kivity wrote:
1. Add per-cpu IDT
Or we could have just two IDTs - one with IST and one without. I
clocked LIDT at 58 cycles (and we need two per heavyweight switch), so
it's not that wonderful.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to
Avi Kivity wrote:
Avi Kivity wrote:
1. Add per-cpu IDT
Or we could have just two IDTs - one with IST and one without. I
clocked LIDT at 58 cycles (and we need two per heavyweight switch), so
it's not that wonderful.
This makes the whole thing unworthwhile. The vmload/vmsave pair costs
On Sun, Dec 28, 2008 at 10:08:35PM +0200, Avi Kivity wrote:
Avi Kivity wrote:
Avi Kivity wrote:
1. Add per-cpu IDT
Or we could have just two IDTs - one with IST and one without. I
clocked LIDT at 58 cycles (and we need two per heavyweight switch), so
it's not that wonderful.
This
Andi Kleen wrote:
This makes the whole thing unworthwhile. The vmload/vmsave pair costs
only 200 cycles (I should have started with this), and 120 cycles on the
heavyweight path plus complexity are not worth 200 cycles on the
lightweight path.
Actually to switch ISTs you need to change
Remove the vmap usage from kvm, this is needed both for ksm and
get_user_pages != write.
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
This commit renames emulator_read_std to kvm_read_guest_virt, and adds a
new function, kvm_write_guest_virt, that allows writing to a guest
virtual address.
Signed-off-by: Izik Eidus iei...@redhat.com
---
arch/x86/include/asm/kvm_host.h |4 ---
arch/x86/kvm/x86.c |
Signed-off-by: Izik Eidus iei...@redhat.com
---
arch/x86/kvm/x86.c| 62 +---
include/linux/kvm_types.h |3 +-
2 files changed, 14 insertions(+), 51 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c812209..29df564
Izik Eidus wrote:
Remove the vmap usage from kvm, this is needed both for ksm and
get_user_pages != write.
Hi Avi,
Thanks for your comments. I've updated the patch according to them.
Please review it. Thank you.
Load assigned devices' PCI option ROMs into the RAM of the
guest OS, and pass the corresponding devfns to the BIOS.
Signed-off-by: Kechao Liu kechao@intel.com
---
bios/rombios.c |
On Sun, Dec 28, 2008 at 07:24:02PM +0800, Sheng Yang wrote:
On Sat, Dec 27, 2008 at 06:06:26PM -0200, Marcelo Tosatti wrote:
On Fri, Dec 26, 2008 at 10:30:07AM +0800, Sheng Yang wrote:
Thanks to Marcelo's observation, the following code has a potential issue:
if