We don't emulate breakpoints yet, so just ignore reads from and writes
to DABR.
This fixes booting of more recent Linux guest kernels for me.
Reported-by: Nello Martuscielli ppc.ad...@gmail.com
Tested-by: Nello Martuscielli ppc.ad...@gmail.com
Signed-off-by: Alexander Graf ag...@suse.de
---
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Older versions of the Power architecture use the Real Mode Offset register and the
Real Mode Limit Selector for mapping the guest Real Mode Area. The guest RMA
should be physically contiguous, since we use the range when address translation
is not enabled.
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Both RMA and hash page table requests will be a multiple of 256K. We can use
a chunk size of 256K to track the free/used 256K chunks in the bitmap. This
should help reduce the bitmap size.
Signed-off-by: Aneesh Kumar K.V
From: Chen Gang gang.c...@asianux.com
'rmls' is 'unsigned long', but lpcr_rmls() returns a negative number when
a failure occurs, so a type cast is needed for the comparison.
'lpid' is 'unsigned long', but kvmppc_alloc_lpid() returns a negative number
when a failure occurs, so a type cast is needed for the comparison.
From: Paul Mackerras pau...@samba.org
Unlike the other general-purpose SPRs, SPRG3 can be read by usermode
code, and is used in recent kernels to store the CPU and NUMA node
numbers so that they can be read by VDSO functions. Thus we need to
load the guest's SPRG3 value into the real SPRG3.
From: Paul Mackerras pau...@samba.org
Commit 8e44ddc3f3 (powerpc/kvm/book3s: Add support for H_IPOLL and
H_XIRR_X in XICS emulation) added a call to get_tb() but didn't
include the header that defines it, and on some configs this means
book3s_xics.c fails to compile:
From: Paul Mackerras pau...@samba.org
This reworks kvmppc_mmu_book3s_64_xlate() to make it check the large
page bit in the hashed page table entries (HPTEs) it looks at, and
to simplify and streamline the code. The checking of the first dword
of each HPTE is now done with a single mask and
From: Paul Mackerras pau...@samba.org
Currently the code assumes that once we load up guest FP/VSX or VMX
state into the CPU, it stays valid in the CPU registers until we
explicitly flush it to the thread_struct. However, on POWER7,
copy_page() and memcpy() can use VMX. These functions do flush
From: Scott Wood scottw...@freescale.com
kvm_guest_enter() was already called by kvmppc_prepare_to_enter().
Don't call it again.
Signed-off-by: Scott Wood scottw...@freescale.com
Signed-off-by: Alexander Graf ag...@suse.de
---
arch/powerpc/kvm/booke.c | 2 --
1 file changed, 2 deletions(-)
Hi Paolo / Gleb,
This is my current patch queue for ppc. Please pull.
Changes include:
- Book3S HV: CMA based memory allocator for linear memory
- A few bug fixes
Alex
The following changes since commit cc2df20c7c4ce594c3e17e9cc260c330646012c8:
KVM: x86: Update symbolic exit codes
From: Paul Mackerras pau...@samba.org
It turns out that if we exit the guest due to an hcall instruction (sc 1),
and the loading of the instruction in the guest exit path fails for any
reason, the call to kvmppc_ld() in kvmppc_get_last_inst() fetches the
instruction after the hcall instruction.
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
The Powerpc architecture uses a hash-based page table mechanism for mapping virtual
addresses to physical addresses. The architecture requires this hash page table to
be physically contiguous. With KVM on Powerpc we currently use early reservation
From: Thadeu Lima de Souza Cascardo casca...@linux.vnet.ibm.com
err was overwritten by a previous function call and checked to be 0. If
the following page allocation fails, 0 will be returned instead of
-ENOMEM.
Signed-off-by: Thadeu Lima de Souza Cascardo casca...@linux.vnet.ibm.com
From: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
We want to use CMA for allocating the hash page table and real mode area for
PPC64. Hence move the DMA contiguous related changes into a separate config
so that ppc64 can enable CMA without requiring DMA contiguous.
Acked-by: Michal Nazarewicz
From: Scott Wood scottw...@freescale.com
Currently this is only being done on 64-bit. Rather than just move it
out of the 64-bit ifdef, move it to kvm_lazy_ee_enable() so that it is
consistent with lazy ee state, and so that we don't track more host
code as interrupts-enabled than necessary.
From: Paul Mackerras pau...@samba.org
This corrects the usage of the tlbie (TLB invalidate entry) instruction
in HV KVM. The tlbie instruction changed between PPC970 and POWER7.
On the PPC970, the bit to select large vs. small page is in the instruction,
not in the RB register value. This
On 29.08.2013, at 07:04, Paul Mackerras wrote:
On Thu, Aug 29, 2013 at 12:00:53AM +0200, Alexander Graf wrote:
On 06.08.2013, at 06:16, Paul Mackerras wrote:
kvm_start_lightweight:
+ /* Copy registers into shadow vcpu so we can access them in real mode */
+ GET_SHADOW_VCPU(r3)
+
On 29.08.2013, at 07:17, Paul Mackerras wrote:
On Thu, Aug 29, 2013 at 12:56:40AM +0200, Alexander Graf wrote:
On 06.08.2013, at 06:18, Paul Mackerras wrote:
#ifdef CONFIG_PPC_BOOK3S_64
- /* default to book3s_64 (970fx) */
+ /*
+* Default to the same as the host if we're on a
Hi Alex,
The second patch (kvm: ppc: booke: check range page invalidation progress on page
setup) in this series fixes a critical issue, and we would like it to be
part of 2.12.
The first patch is not that important, but pretty simple.
Thanks
-Bharat