On 11/20/2015 04:40 PM, Takuya Yoshikawa wrote:
It seems like you are all busy now, so I've made this patch set so that
the mechanical and trivial changes come first.
V2->V3:
Patch 01: Rebased and moved here. Updated stale comments.
We may also want to use a union, inside the struct, to
On 12 November 2015 at 16:20, Alex Bennée wrote:
> As we haven't always had guest debug support we need to probe for it.
> Additionally we don't do this in the start-up capability code so we
> don't fall over on old kernels.
>
> Signed-off-by: Alex Bennée
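A hedged sketch of the runtime probe described above: check the capability through `KVM_CHECK_EXTENSION` instead of assuming it exists, so old kernels are handled gracefully. `KVM_CAP_SET_GUEST_DEBUG` is a real capability name, but the excerpt does not show which capability this particular patch probes, so treat it as illustrative.

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

/* Probe for guest-debug support at runtime; returns nonzero if available.
 * Old kernels simply report the capability as absent instead of failing. */
static int have_guest_debug(void)
{
    int kvm = open("/dev/kvm", O_RDONLY);
    int ret;

    if (kvm < 0)
        return 0;  /* no KVM on this host at all */
    ret = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_SET_GUEST_DEBUG);
    close(kvm);
    return ret > 0;
}
```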
On 20 November 2015 at 15:05, Peter Maydell wrote:
> On 12 November 2015 at 16:20, Alex Bennée wrote:
>> As we haven't always had guest debug support we need to probe for it.
>> Additionally we don't do this in the start-up capability code so we
On 11/20/2015 02:47 AM, Stephen Rothwell wrote:
> Hi all,
>
> Today's linux-next merge of the kvms390 tree got a conflict in:
>
> include/linux/kvm_host.h
> arch/s390/kvm/interrupt.c
> arch/s390/kvm/sigp.c
>
> between commits:
>
> db27a7a37aa0 ("KVM: Provide function for VCPU lookup by
Peter Maydell writes:
> On 20 November 2015 at 15:05, Peter Maydell wrote:
>> On 12 November 2015 at 16:20, Alex Bennée wrote:
>>> As we haven't always had guest debug support we need to probe for it.
>>> Additionally
On 20/11/2015 09:40, Takuya Yoshikawa wrote:
> About patch 03: There was a comment from Xiao on the usage of braces for a
> single-line else-if statement. As I answered, checkpatch did not complain
> about this, and when the corresponding if block has multiple lines, some developers
On Mon, Nov 16, 2015 at 01:11:41PM +, Marc Zyngier wrote:
> Implement the vgic-v2 save restore as a direct translation of
> the assembly code version.
Hi Marc,
I have one comment below:
Cheers,
--
Steve
>
> Signed-off-by: Marc Zyngier
> ---
>
On 12 November 2015 at 16:20, Alex Bennée wrote:
> These don't involve messing around with debug registers, just setting
> the breakpoint instruction in memory. GDB will not use this mechanism if
> it can't access the memory to write the breakpoint.
>
> All the kernel has
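As an illustration of the software-breakpoint mechanism the mail describes, patching a breakpoint instruction into the code stream: the constant is the AArch64 `BRK #0` encoding, while the function and buffer names are made up for this sketch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* AArch64 BRK #0, little-endian instruction encoding */
static const uint32_t aarch64_brk = 0xd4200000u;

/* Patch a soft breakpoint into (a copy of) guest code, saving the original
 * instruction so it can be restored when the breakpoint is removed. */
static uint32_t set_soft_breakpoint(uint8_t *code, size_t off)
{
    uint32_t saved;

    memcpy(&saved, code + off, sizeof(saved));
    memcpy(code + off, &aarch64_brk, sizeof(aarch64_brk));
    return saved;
}
```

This is exactly why GDB refuses to use the mechanism when it cannot write the target memory: the breakpoint only exists as a rewritten instruction.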
On 20/11/2015 09:11, Thomas Huth wrote:
> In the old DABR register, the BT (Breakpoint Translation) bit
> is bit number 61. In the new DAWRX register, the WT (Watchpoint
> Translation) bit is bit number 59. So to move the DABR-BT bit
> into the position of the DAWRX-WT bit, it has to be shifted
On 12 November 2015 at 16:20, Alex Bennée wrote:
> This adds basic support for HW assisted debug. The ioctl interface to
> KVM allows us to pass an implementation defined number of break and
> watch point registers. When KVM_GUESTDBG_USE_HW is specified these
> debug
Hi Steve,
On 20/11/15 15:22, Steve Capper wrote:
> On Mon, Nov 16, 2015 at 01:11:41PM +, Marc Zyngier wrote:
>> > Implement the vgic-v2 save restore as a direct translation of
>> > the assembly code version.
> Hi Marc,
> I have one comment below:
>
> Cheers,
> -- Steve
>> >
>> >
On 12 November 2015 at 16:20, Alex Bennée wrote:
> From: Alex Bennée
>
> The aim of these tests is to combine with an appropriate kernel
> image (with symbol-file vmlinux) and check it behaves as it should.
> Given a kernel it checks:
>
> - single step
On 12 November 2015 at 16:20, Alex Bennée wrote:
> From: Alex Bennée
>
> If we can't find details for the debug exception in our debug state
> then we can assume the exception is due to debugging inside the guest.
> To inject the exception into the guest
On Mon, Nov 16, 2015 at 01:11:42PM +, Marc Zyngier wrote:
> Implement the vgic-v3 save restore as a direct translation of
> the assembly code version.
I think there's a couple of typos below Marc.
>
> Signed-off-by: Marc Zyngier
> ---
> arch/arm64/kvm/hyp/Makefile
On 12 November 2015 at 16:20, Alex Bennée wrote:
> This adds support for single-step. There isn't much to do on the QEMU
> side as after we set-up the request for single step via the debug ioctl
> it is all handled within the kernel.
>
> Signed-off-by: Alex Bennée
On Thu, Nov 19, 2015 at 04:15:48PM +, Xie, Huawei wrote:
> On 11/18/2015 12:28 PM, Venkatesh Srinivas wrote:
> > On Tue, Nov 17, 2015 at 08:08:18PM -0800, Venkatesh Srinivas wrote:
> >> On Mon, Nov 16, 2015 at 7:46 PM, Xie, Huawei wrote:
> >>
> >>> On 11/14/2015 7:41 AM,
On Tue, Nov 10, 2015 at 11:54:22AM -0500, Andrew Jones wrote:
> On Tue, Nov 10, 2015 at 05:38:38PM +0100, Paolo Bonzini wrote:
> >
> >
> > On 06/11/2015 01:24, Andrew Jones wrote:
> > > Many of these patches were posted once. Some weren't, but anyway
> > > almost everything is pretty trivial.
On 20/11/15 16:48, Steve Capper wrote:
> On Mon, Nov 16, 2015 at 01:11:42PM +, Marc Zyngier wrote:
>> Implement the vgic-v3 save restore as a direct translation of
>> the assembly code version.
>
> I think there's a couple of typos below Marc.
[...]
>> +case 10:
>> +
From: Borislav Petkov
It looks like this in action:
kvm [5197]: vcpu0, guest rIP: 0x810187ba unhandled rdmsr: 0xc001102
and helps to pinpoint quickly where in the guest we did the unsupported
thing.
Signed-off-by: Borislav Petkov
---
You just ignored my comment on the previous version...
On 11/20/2015 04:47 PM, Takuya Yoshikawa wrote:
kvm_mmu_mark_parents_unsync() alone uses pte_list_walk(), which does
nearly the same as the for_each_rmap_spte macro. The only difference
is that is_shadow_present_pte() checks cannot be
You can move this patch to the front of
[PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of
pte_list_walk()
By moving kvm_mmu_mark_parents_unsync() after mmu_spte_set() (the parent
spte is present by then), you can directly clean up for_each_rmap_spte().
On
On 2015/11/20 17:46, Xiao Guangrong wrote:
You just ignored my comment on the previous version...
I'm sorry but please read the explanation in patch 00.
I've read your comments and I'm not ignoring you.
Since this patch set has become larger than expected, I'm sending
this version so that
From: Borislav Petkov
Software Error Recovery, i.e. SER, is purely an Intel feature and it
shouldn't be set by default. Enable it only on Intel.
Signed-off-by: Borislav Petkov
---
target-i386/cpu.c | 7 ---
target-i386/cpu.h | 9 -
target-i386/kvm.c | 5
Hi,
CC'ing qemu-devel.
Am 21.11.2015 um 00:01 schrieb Borislav Petkov:
> From: Borislav Petkov
>
> Software Error Recovery, i.e. SER, is purely an Intel feature and it
> shouldn't be set by default. Enable it only on Intel.
Is this new in 2.5? Otherwise we would probably need
On Sat, Nov 21, 2015 at 12:11:35AM +0100, Andreas Färber wrote:
> Hi,
>
> CC'ing qemu-devel.
Ah, thanks.
> Am 21.11.2015 um 00:01 schrieb Borislav Petkov:
> > From: Borislav Petkov
> >
> > Software Error Recovery, i.e. SER, is purely an Intel feature and it
> > shouldn't be set
In the old DABR register, the BT (Breakpoint Translation) bit
is bit number 61. In the new DAWRX register, the WT (Watchpoint
Translation) bit is bit number 59. So to move the DABR-BT bit
into the position of the DAWRX-WT bit, it has to be shifted by
two, not only by one. This fixes hardware
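A minimal sketch of the arithmetic being fixed, using the kernel-style `PPC_BIT` big-endian bit-numbering helper (the function name is illustrative): bit 61 sits two positions to the right of bit 59, so a shift by two, not one, is needed.

```c
#include <assert.h>
#include <stdint.h>

/* IBM bit numbering: bit 0 is the most significant bit of a 64-bit register */
#define PPC_BIT(n)  (1ULL << (63 - (n)))

#define DABR_BT   PPC_BIT(61)  /* Breakpoint Translation */
#define DAWRX_WT  PPC_BIT(59)  /* Watchpoint Translation */

/* Move the DABR-BT bit into the DAWRX-WT position: a left shift by 2 */
static inline uint64_t dabr_bt_to_dawrx_wt(uint64_t dabr)
{
    return (dabr & DABR_BT) << 2;
}
```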
On Thu, Nov 19, 2015 at 11:38:06PM +, David Woodhouse wrote:
> On Thu, 2015-11-19 at 13:59 -0800, Andy Lutomirski wrote:
> >
> > >
> > > So thinking hard about it, I don't see any real drawbacks to making this
> > > conditional on a new feature bit, that Xen can then set..
> >
> > Can you
We may also want to use a union, inside the struct, to eliminate casting to
(u64 *) type when spte is in the
New struct kvm_rmap_head makes the code type-safe to some extent.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/kvm/mmu.c | 196
arch/x86/kvm/mmu_audit.c|
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9a6801..8a1593f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@
mmu_set_spte()'s code is based on the assumption that the emulate
parameter has a valid pointer value if set_spte() returns true and
write_fault is not zero. In other cases, emulate may be NULL, so a
NULL-check is needed.
Stop passing emulate pointer and make mmu_set_spte() return the emulate
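A schematic sketch of the interface change, with simplified names (`set_spte_ok` stands in for set_spte()): instead of conditionally writing through an out-parameter that may be NULL, the emulate flag becomes the return value, so no NULL-check is needed anywhere.

```c
#include <assert.h>
#include <stdbool.h>

/* stand-in for set_spte(); returns true when the spte was installed */
static bool set_spte_ok(int write_fault)
{
    return write_fault != 0;
}

/* after the change: the emulate decision is the return value, not an
 * out-parameter, so callers that don't care simply ignore it */
static bool mmu_set_spte_sketch(int write_fault)
{
    bool emulate = false;

    if (set_spte_ok(write_fault) && write_fault)
        emulate = true;
    return emulate;
}
```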
Both __mmu_unsync_walk() and mmu_pages_clear_parents() contain a three-line
sequence that clears a bit in the unsync child bitmap; the former places it
inside a loop block and uses a few goto statements to jump to it.
A new helper function, clear_unsync_child_bit(), makes the code cleaner.
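A self-contained sketch of such a helper, using a minimal stand-in for the shadow-page struct (field names follow the text; the struct itself is illustrative):

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* minimal stand-in for the shadow-page struct */
struct sp_sketch {
    unsigned int unsync_children;
    unsigned long unsync_child_bitmap[512 / BITS_PER_LONG];
};

/* the repeated three-line sequence, folded into one helper */
static void clear_unsync_child_bit(struct sp_sketch *sp, int idx)
{
    sp->unsync_children--;
    sp->unsync_child_bitmap[idx / BITS_PER_LONG] &=
        ~(1UL << (idx % BITS_PER_LONG));
}
```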
is_rmap_spte(), originally named is_rmap_pte(), was introduced when the
simple reverse mapping was implemented by commit cd4a4e5374110444
("[PATCH] KVM: MMU: Implement simple reverse mapping"). At that point,
its role was clear and only rmap_add() and rmap_remove() were using it
to select sptes
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.
Move this check into rmap_get_first/next() so that all call sites, not
just the users of the for_each_rmap_spte() macro,
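A hedged sketch of the idea: fold the sanity check into the getter so every caller benefits, rather than repeating a BUG_ON after each call. The flag value and function signature are simplified stand-ins, not the kernel's real ones.

```c
#include <assert.h>
#include <stddef.h>

#define SPTE_PRESENT 0x1UL   /* illustrative "present" flag */

/* after the change: the check lives inside the getter itself */
static unsigned long *rmap_get_first_sketch(unsigned long *list, size_t n)
{
    if (n == 0)
        return NULL;
    assert(list[0] & SPTE_PRESENT);   /* stands in for the BUG_ON */
    return &list[0];
}
```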
On Fri, Nov 20, 2015 at 01:56:39PM +1100, Benjamin Herrenschmidt wrote:
> On Thu, 2015-11-19 at 23:38 +, David Woodhouse wrote:
> >
> > I understand that POWER and other platforms don't currently have a
> > clean way to indicate that certain device don't have translation. And I
> > understand
Make kvm_mmu_alloc_page() do just what its name suggests, and remove
the extra allocation error check and zero-initialization of parent_ptes:
shadow page headers allocated by kmem_cache_zalloc() are always in the
per-VCPU pools.
Signed-off-by: Takuya Yoshikawa
kvm_mmu_mark_parents_unsync() alone uses pte_list_walk(), which does
nearly the same as the for_each_rmap_spte macro. The only difference
is that is_shadow_present_pte() checks cannot be placed there because
kvm_mmu_mark_parents_unsync() can be called with a new parent pointer
whose entry is not
Every time kvm_mmu_get_page() is called with a non-NULL parent_pte
argument, link_shadow_page() follows that to set the parent entry so
that the new mapping will point to the returned page table.
Moving parent_pte handling there makes it possible to clean up the code
because parent_pte is passed to
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b020323..9baf884 100644
Hi There!
After installing a Windows 2008 R2 guest and all of its drivers, the
CPU frequency stays at 100%.
I'm using QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.19)
on Ubuntu 14.04.
See the attachment for details.
Can anybody help me?
Thanks.
Thiago Oliveira