abled, then the
virtual-mode handlers assume that they are being called only to finish
up the operation. Therefore we turn off the real-mode flag in the XICS
code when running as a nested hypervisor.
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/asm-prototy
This adds code to call the H_IPI and H_EOI hypercalls when we are
running as a nested hypervisor (i.e. without the CPU_FTR_HVMODE cpu
feature) and we would otherwise access the XICS interrupt controller
directly or via an OPAL call.
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv.c
different endianness, the version number
check will fail and the hcall will be rejected.
Nested hypervisors do not support indep_threads_mode=N, so this adds
code to print a warning message if the administrator has set
indep_threads_mode=N, and treat it as Y.
Signed-off-by: Paul Mackerras
---
arch
later) processor.
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/hvcall.h | 5 +
arch/powerpc/include/asm/kvm_book3s.h | 10 +-
arch/powerpc/include/asm/kvm_book3s_64.h | 33
arch/powerpc/include/asm/kvm_book3s_asm.h | 3 +
arch/powerpc/include/asm/kvm_host.h
kvmppc_unmap_pte() does a sequence of operations that are open-coded in
kvm_unmap_radix(). This extends kvmppc_unmap_pte() a little so that it
can be used by kvm_unmap_radix(), and makes kvm_unmap_radix() call it.
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm
any pgtable, not specific to the one for this guest.
[pau...@ozlabs.org - reduced diffs from previous code]
Reviewed-by: David Gibson
Signed-off-by: Suraj Jitindar Singh
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 210 +++--
1 file
as a nested hypervisor the
real hypervisor could use this to determine when it can free resources.
Reviewed-by: David Gibson
Signed-off-by: Suraj Jitindar Singh
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git
through the
process tables or a guest real address through the partition tables.
[pau...@ozlabs.org - reduced diffs from previous code]
Reviewed-by: David Gibson
Signed-off-by: Suraj Jitindar Singh
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/kvm_book3s.h | 3 +
arch/powerpc/kvm
its. This changes the code to use the regs.ccr field
instead of cr, and changes the assembly code on 64-bit platforms to
use 64-bit loads and stores instead of 32-bit ones.
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/kvm_book3s.h| 4 ++--
arch/powerpc/i
in hypervisor mode,
along with the CPU_FTR_HVMODE bit.
Doing this will not change anything at this stage because the only
code that tests CPU_FTR_P9_TM_HV_ASSIST is in HV KVM, which currently
can only be used when CPU_FTR_HVMODE is set.
Reviewed-by: David Gibson
Signed-off-by: Paul
From: Suraj Jitindar Singh
Add definition of the LPCR EVIRT (enhanced virtualisation) bit.
Reviewed-by: David Gibson
Signed-off-by: Suraj Jitindar Singh
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/reg.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/include
ries for a HPT guest.
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/kvm_book3s_64.h | 1 +
arch/powerpc/include/asm/kvm_host.h | 1 +
arch/powerpc/kvm/book3s_64_mmu_radix.c | 179 +++
arch/powerpc/kvm/book3s_hv.c
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/reg.h | 1 +
arch/powerpc/kvm/book3s_hv.c | 5 -
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index e5b314e..6fda746 100644
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/asm-prototypes.h | 2 +
arch/powerpc/include/asm/kvm_ppc.h| 2 +
arch/powerpc/kvm/book3s_hv.c | 425 +-
arch/powerpc/kvm/book3s_hv_ras.c | 2 +
arch/powerpc/kvm
ings for following patches, let's
drop the vcore lock in the for_each_runnable_thread loop, so
kvmppc_handle_exit_hv() gets called without the vcore lock held.
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv.c | 19 ++-
1 file changed,
the code to be simplified
quite a bit.
_kvmppc_save_tm_pr and _kvmppc_restore_tm_pr become much simpler with
this change, since they now only need to save and restore TAR and pass
1 for the 3rd argument to __kvmppc_{save,restore}_tm.
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/asm
he machine check and HMI real-mode handling is moved before that
label.
Also, the code to handle external interrupts is moved out of line, as
is the code that calls kvmppc_realmode_hmi_handler().
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv_ras.c| 8 ++
arch/p
This pulls out the assembler code that is responsible for saving and
restoring the PMU state for the host and guest into separate functions
so they can be used from an alternate entry path. The calling
convention is made compatible with C.
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
wed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/kvm_asm.h | 4 +--
arch/powerpc/include/asm/kvm_host.h| 1 +
arch/powerpc/kvm/book3s.c | 43 --
arch/powerpc/kvm/book3s_hv_rm_xics.c |
ing flag instead for this purpose.
Therefore there is no need to do anything with the pending_exceptions
bitmap.
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_xive_template.c | 8
1 file changed, 8 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_xive_template.c
b/arch/powe
This patch series implements nested virtualization in the KVM-HV
module for radix guests on POWER9 systems. Unlike PR KVM, nested
guests are able to run in supervisor mode, meaning that performance is
much better than with PR KVM, and is very close to the performance of
a non-nested guest for mos
. The algorithm
expressed in the C code is almost identical to the previous
algorithm.
Reviewed-by: David Gibson
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/kvm_ppc.h | 1 +
arch/powerpc/kvm/book3s_hv.c| 3 +-
arch/powerpc/kvm/book3s_hv_builtin.c| 48
On Tue, Sep 04, 2018 at 06:16:01PM +1000, Nicholas Piggin wrote:
> THP paths can defer splitting compound pages until after the actual
> remap and TLB flushes to split a huge PMD/PUD. This causes radix
> partition scope page table mappings to get out of synch with the host
> qemu page table mapping
On Mon, Sep 10, 2018 at 08:05:38PM +1000, Michael Neuling wrote:
>
> > > + /* Make sure we aren't patching a freed init section */
> > > + if (in_init_section(patch_addr) && init_freed())
> > > + return 0;
> > > +
> >
> > Do we even need the init_freed() check?
>
> Maybe not. If userspa
On Tue, Sep 04, 2018 at 04:12:07PM -0500, Segher Boessenkool wrote:
> On Mon, Sep 03, 2018 at 08:49:35PM +0530, Sandipan Das wrote:
> > + case 538: /* cnttzw */
> > + if (!cpu_has_feature(CPU_FTR_ARCH_300))
> > + return -1;
> > +
On Mon, Sep 03, 2018 at 01:28:44PM +1000, David Gibson wrote:
> On Fri, Aug 31, 2018 at 04:08:50PM +1000, Alexey Kardashevskiy wrote:
> > At the moment the real mode handler of H_PUT_TCE calls iommu_tce_xchg_rm()
> > which in turn reads the old TCE and if it was a valid entry - marks
> > the physic
ch also moves the local_irq_restore to the point after the pte
pointer returned by find_linux_pte has been dereferenced because that
seems safer, and adds a check to avoid doing the find_linux_pte() call
once mem->pageshift has been reduced to PAGE_SHIFT, as an optimization.
Cc: sta...@vger.kernel.or
This is a repost of a series that I posted back in 2016 but which was
never applied. It aims to make the exception handling code in
__copy_tofrom_user_base clearer and easier to verify, and strengthens
the selftests for the user copy code to test all the paths and to test
the exception handling.
code has been written to be compact rather than as fast as
possible.
Signed-off-by: Paul Mackerras
---
arch/powerpc/lib/copyuser_64.S | 29 +++--
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
cro per load or store. These loads and stores all use exactly
the same exception handler, which simply resets the argument registers
r3, r4 and r5 to their original values and re-does the whole copy
using the slower loop.
Signed-off-by: Paul Mackerras
---
arch/powerpc/lib/copy
, and that is reflected in failures in these tests.
Based on a test program from Anton Blanchard.
[pau...@ozlabs.org - test all three paths, wrote commit description,
made EX_TABLE create an exception table.]
Signed-off-by: Paul Mackerras
---
.../testing/selftests/powerpc/copyloops/.gitignore
, and makes 2 or 3 versions of each test, each
using a different code path, so as to cover all the possible paths.
Signed-off-by: Paul Mackerras
---
arch/powerpc/lib/copyuser_64.S | 7 +
arch/powerpc/lib/copyuser_power7.S | 21 ++---
arch/powerpc/lib
configured in, so that a full complement of KVM_MAX_VCPUS VCPUs can
be created on POWER9 in all guest SMT modes and emulated hardware
SMT modes.
Signed-off-by: Paul Mackerras
---
This and the next patch apply on my kvm-ppc-next branch, which
includes Sam Bobroff's patch "KVM: PPC: Book3S H
ted.
Hence it is (theoretically) possible for the check in
kvmppc_core_vcpu_create_hv() to race with another userspace thread
changing kvm->arch.emul_smt_mode.
This fixes it by moving the test that uses kvm->arch.emul_smt_mode into
the block where kvm->lock is held.
Signed-off-by: Paul Mackerr
On Wed, Jul 25, 2018 at 04:12:02PM +1000, Sam Bobroff wrote:
> From: Sam Bobroff
>
> It is not currently possible to create the full number of possible
> VCPUs (KVM_MAX_VCPUS) on Power9 with KVM-HV when the guest uses less
> threads per core than its core stride (or "VSMT mode"). This is
> becau
On Thu, Jul 19, 2018 at 12:25:10PM +1000, Sam Bobroff wrote:
> From: Sam Bobroff
>
> It is not currently possible to create the full number of possible
> VCPUs (KVM_MAX_VCPUS) on Power9 with KVM-HV when the guest uses less
> threads per core than its core stride (or "VSMT mode"). This is
> becau
On Thu, Jul 19, 2018 at 04:06:10PM +1000, Michael Ellerman wrote:
> On Tue, 2018-07-17 at 07:19:12 UTC, Alexey Kardashevskiy wrote:
> > The size is always equal to 1 page so let's use this. Later on this will
> > be used for other checks which use page shifts to check the granularity
> > of access.
On Sat, Jul 07, 2018 at 11:07:25AM +0200, Nicholas Mc Guire wrote:
> The constants are 64bit but not explicitly declared UL resulting
> in sparse warnings. Fixed by declaring the constants UL.
>
> Signed-off-by: Nicholas Mc Guire
Thanks, patch applied to my kvm-ppc-next branch.
Paul.
On Sat, Jul 07, 2018 at 08:53:07AM +0200, Nicholas Mc Guire wrote:
> The call to of_find_compatible_node() is returning a pointer with
> incremented refcount so it must be explicitly decremented after the
> last use. As here it is only being used for checking of node presence
> but the result is n
On Mon, May 28, 2018 at 09:48:26AM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Originally PPC KVM MMIO emulation uses only 0~31#(5 bits) for VSR
> reg number, and use mmio_vsx_tx_sx_enabled field together for
> 0~63# VSR regs.
>
> Currently PPC KVM MMIO emulation is reimplemented
On Wed, Jun 20, 2018 at 06:42:58PM +1000, Alexey Kardashevskiy wrote:
> When attaching a hardware table to LIOBN in KVM, we match table parameters
> such as page size, table offset and table size. However the tables are
> created via very different paths - VFIO and KVM - and the VFIO path goes
> th
On Tue, Jul 17, 2018 at 05:19:11PM +1000, Alexey Kardashevskiy wrote:
> This is to improve page boundaries checking and should probably
> be cc:stable. I came across this while debugging nvlink2 passthrough
> but the lack of checking might be exploited by the existing userspace.
>
> The get_user_
On Thu, Jul 12, 2018 at 05:30:26PM +1000, Alexey Kardashevskiy wrote:
> This adds a debugfs entry with mm context id of a process which is using
> KVM. This id is an index in the process table so the userspace can dump
> that tree provided it is granted access to /dev/mem.
Is the main intention he
y: Alexey Kardashevskiy
Acked-by: Paul Mackerras
On Wed, May 23, 2018 at 03:01:47PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> It is a simple patch just for moving kvmppc_save_tm/kvmppc_restore_tm()
> functionalities to tm.S. There is no logic change. The reconstruct of
> those APIs will be done in later patches to improve read
On Mon, May 21, 2018 at 12:09:41PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Currently guest kernel doesn't handle TAR fac unavailable and it always
> runs with TAR bit on. PR KVM will lazily enable TAR. TAR is not a
> frequent-use reg and it is not included in SVCPU struct.
>
On Mon, May 21, 2018 at 01:24:24PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> This patch reimplements LOAD_VSX/STORE_VSX instruction MMIO emulation with
> analyse_intr() input. It utilizes VSX_FPCONV/VSX_SPLAT/SIGNEXT exported
> by analyse_instr() and handle accordingly.
>
> Whe
On Mon, May 07, 2018 at 02:20:06PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> We already have analyse_instr() which analyzes instructions for the
> instruction
> type, size, additional flags, etc. What kvmppc_emulate_loadstore() did is
> somehow
> duplicated and it will be good
On Mon, May 14, 2018 at 02:04:10PM +1000, Michael Ellerman wrote:
[snip]
> OK good, in commit:
>
> c17b98cf6028 ("KVM: PPC: Book3S HV: Remove code for PPC970 processors") (Dec
> 2014)
>
> So we should be able to do the patch below.
>
> cheers
>
>
> diff --git a/arch/powerpc/include/asm/kvm_ho
c_handle_load(s)/kvmppc_handle_store()
> accordingly.
>
> For FP store MMIO emulation, the FP regs need to be flushed firstly so
> that the right FP reg vals can be read from vcpu->arch.fpr, which will
> be stored into MMIO data.
>
> Suggested-by: Paul Mackerras
> Si
r() and invokes
> kvmppc_handle_load(s)/kvmppc_handle_store() accordingly.
>
> It also move CACHEOP type handling into the skeleton.
>
> instruction_type within kvm_ppc.h is renamed to avoid conflict with
> sstep.h.
>
> Suggested-by: Paul Mackerras
> Signed-off-by: Simon Guo
48: 7c 80 22 14 add r4,r0,r4
> 24c: 78 83 00 20 clrldi r3,r4,32
> 250: 4e 80 00 20 blr
>
> Fixes: 6ad966d7303b7 ("powerpc/64: Fix checksum folding in csum_add()")
> Signed-off-by: Christophe Leroy
Seems I was right first time... :)
Acked-by: Paul Mackerras
On Wed, May 16, 2018 at 10:11:11AM +0530, Souptick Joarder wrote:
> On Thu, May 10, 2018 at 11:57 PM, Souptick Joarder
> wrote:
> > Use new return type vm_fault_t for fault handler
> > in struct vm_operations_struct. For now, this is
> > just documenting that the function returns a
> > VM_FAULT v
On Wed, Feb 28, 2018 at 01:37:14AM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Currently _kvmppc_save/restore_tm() APIs can only be invoked from
> assembly function. This patch adds C function wrappers for them so
> that they can be safely called from C function.
>
> Signed-off-b
On Wed, Feb 28, 2018 at 01:52:37AM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> In both HV/PR KVM, the KVM_SET_ONE_REG/KVM_GET_ONE_REG ioctl should
> be able to perform without load vcpu. This patch adds
> KVM_SET_ONE_REG/KVM_GET_ONE_REG implementation to async ioctl
> function.
>
On Wed, Feb 28, 2018 at 01:37:07AM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> In current days, many OS distributions have utilized transaction
> memory functionality. In PowerPC, HV KVM supports TM. But PR KVM
> does not.
>
> The drive for the transaction memory support of PR KV
On Wed, Feb 28, 2018 at 01:52:26AM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Currently kernel doesn't use transaction memory.
> And there is an issue for privilege guest that:
> tbegin/tsuspend/tresume/tabort TM instructions can impact MSR TM bits
> without trap into PR host. So
On Wed, Feb 28, 2018 at 01:52:25AM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> The mfspr/mtspr on TM SPRs(TEXASR/TFIAR/TFHAR) are non-privileged
> instructions and can be executed at PR KVM guest without trapping
> into host in problem state. We only emulate mtspr/mfspr
> texasr/t
On Fri, Apr 06, 2018 at 04:12:32PM +1000, Michael Ellerman wrote:
> Nicholas Piggin writes:
> > diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> > b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> > index 78e6a392330f..0221a0f74f07 100644
> > --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> > +++ b/arch/power
On Sun, May 06, 2018 at 05:37:27PM +1000, Nicholas Piggin wrote:
> Implement a local TLB flush for invalidating an LPID with variants for
> process or partition scope. And a global TLB flush for invalidating
> a partition scoped page of an LPID.
>
> These will be used by KVM in subsequent patches.
On Wed, Apr 25, 2018 at 07:54:33PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> We already have analyse_instr() which analyzes instructions for the
> instruction
> type, size, additional flags, etc. What kvmppc_emulate_loadstore() did is
> somehow
> duplicated and it will be good
On Wed, Apr 25, 2018 at 07:54:41PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Currently HV will save math regs(FP/VEC/VSX) when trap into host. But
> PR KVM will only save math regs when qemu task switch out of CPU.
>
> To emulate FP/VEC/VSX load, PR KVM need to flush math regs
On Wed, Apr 25, 2018 at 07:54:37PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> stwsiwx will place contents of word element 1 of VSR into word
> storage of EA. So the element size of stwsiwx should be 4.
>
> This patch correct the size from 8 to 4.
>
> Signed-off-by: Simon Guo
>
On Wed, Apr 25, 2018 at 07:54:39PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Some VSX instruction like lxvwsx will splat word into VSR. This patch
> adds VSX copy type KVMPPC_VSX_COPY_WORD_LOAD_DUMP to support this.
>
> Signed-off-by: Simon Guo
Reviewed-by: Paul Mackerras
at one
isn't exactly about the type of instruction, but more about the type
of interrupt that led to us trying to fetch the instruction.
> Suggested-by: Paul Mackerras
> Signed-off-by: Simon Guo
> ---
> arch/powerpc/include/asm/sstep.h | 2 +-
> arch/powerpc/kvm/emulate_loadstore
On Wed, Apr 25, 2018 at 07:54:36PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> When KVM emulates VMX store, it will invoke kvmppc_get_vmx_data() to
> retrieve VMX reg val. kvmppc_get_vmx_data() will check mmio_host_swabbed
> to decide which double word of vr[] to be used. But the
On Wed, Apr 25, 2018 at 07:54:34PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> Current regs are scattered at kvm_vcpu_arch structure and it will
> be more neat to organize them into pt_regs structure.
>
> Also it will enable reconstruct MMIO emulation code with
"reimplement" wou
On Wed, Apr 25, 2018 at 07:54:35PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> This patch moves nip/ctr/lr/xer registers from scattered places in
> kvm_vcpu_arch to pt_regs structure.
>
> cr register is "unsigned long" in pt_regs and u32 in vcpu->arch.
> It will need more conside
c_handle_load(s)/kvmppc_handle_store()
> accordingly.
>
> The FP regs need to be flushed so that the right FP reg vals can be read
> from vcpu->arch.fpr.
This only applies to store instructions; it would be clearer if you
said that explicitly.
>
> Suggested-by: Paul Mackerra
dle accordingly.
>
> When emulating VSX store, the VSX reg will need to be flushed so that
> the right reg val can be retrieved before writing to IO MEM.
>
> Suggested-by: Paul Mackerras
> Signed-off-by: Simon Guo
Looks good, except that you shouldn't need the special case for
st
reg val can be retrieved before writing to
> IO MEM.
>
> Suggested-by: Paul Mackerras
> Signed-off-by: Simon Guo
This looks fine for lvx and stvx, but now we are also doing something
for the vector element loads and stores (lvebx, stvebx, lvehx, stvehx,
etc.) without having the log
On Wed, Apr 25, 2018 at 07:54:38PM +0800, wei.guo.si...@gmail.com wrote:
> From: Simon Guo
>
> To optimize kvm emulation code with analyse_instr, adds new
> mmio_update_ra flag to aid with GPR RA update.
>
> This patch arms RA update at load/store emulation path for both
> qemu mmio emulation or
On Tue, Apr 17, 2018 at 09:56:24AM +0200, Christophe Leroy wrote:
> add_reloc_offset() is almost redundant with reloc_offset()
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/setup.h | 3 +--
> arch/powerpc/kernel/misc.S | 16
> arch/power
On Tue, Mar 27, 2018 at 05:22:32PM +0200, LEROY Christophe wrote:
> Shile Zhang wrote:
>
> >fix the missed point in Paul's patch:
> >"powerpc/64: Fix checksum folding in csum_tcpudp_nofold and
> >ip_fast_csum_nofold"
> >
> >Signed-off-by: Shile Zhang
> >---
> > arch/powerpc/include/asm/checks
to restricted __be64
Signed-off-by: Gavin Shan
Signed-off-by: Paul Mackerras
---
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c
b/arch/powerpc/platforms/powernv/pci-ioda.c
index a6c92c7..71de087 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/p
recording, which will have already been done before we get into fake
suspend state). Therefore these changes are not made subject to a CPU
feature bit.
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
This patch adds the code to do that, conditional
on the CPU_FTR_P9_TM_XER_SO_BUG feature bit.
Signed-off-by: Suraj Jitindar Singh
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 24
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/ar
the guest
treclaim instruction that had done failure recording, not the treclaim
done in hypervisor state in the guest exit path.
With this, the KVM_CAP_PPC_HTM capability returns true (1) even if
transactional memory is not available to host userspace.
Signed-off-by: Paul Mackerras
---
arch
n SMT4 after pnv_power9_force_smt4_catch() function returns,
until the pnv_power9_force_smt4_release() function is called.
It undoes the effect of step 1 above and allows the other threads
to go into a stop state.
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/asm-prototypes.h | 3 ++
arch/p
bled using a "tm-suspend-hypervisor-assist" node in the device
tree, and a "tm-suspend-xer-so-bug" node enables the workarounds for
the XER[SO] bug. In the absence of such nodes, a quirk enables both
for POWER9 "Nimbus" DD2.2 processors.
Signed-off-by: Paul
This patch series applies on top of my patch series "powerpc: Free up
CPU feature bits".
POWER9 has some shortcomings in its implementation of transactional
memory. Starting with v2.2 of the "Nimbus" chip, some changes have
been made to the hardware which make it able to generate hypervisor
inter
On Wed, Mar 21, 2018 at 09:24:56PM +1100, Paul Mackerras wrote:
> This patch series applies on top of my patch series "powerpc: Free up
> CPU feature bits".
>
> POWER9 has some shortcomings in its implementation of transactional
> memory. Starting with v2.2 of the &q
On Sun, Mar 18, 2018 at 04:35:56PM +0530, Aneesh Kumar K.V wrote:
[snip]
> +static inline int get_ea_context(mm_context_t *ctx, unsigned long ea)
> +{
> + int index = ea >> MAX_EA_BITS_PER_CONTEXT;
> +
> + if (likely(index < ARRAY_SIZE(ctx->extended_id)))
> + return ctx->extend
recording, which will have already been done before we get into fake
suspend state). Therefore these changes are not made subject to a CPU
feature bit.
Signed-off-by: Paul Mackerras
---
arch/powerpc/kernel/idle_book3s.S | 4 ++--
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 17 ++--
feature bits on 64-bit machines.
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/cputable.h | 133 --
arch/powerpc/kernel/cpu_setup_6xx.S | 2 +-
arch/powerpc/kernel/cpu_setup_fsl_booke.S | 2 +-
3 files changed, 73 insertions(+), 64
e to use a bit to indicate the unusual
situation rather than the common situation. This therefore defines
a CPU_FTR_USE_RTC bit in place of the CPU_FTR_USE_TB bit, and
arranges for it to be set on PPC601 systems.
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/cputa
The CPU_FTR_L2CSR bit is never tested anywhere, so let's reclaim the bit.
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/cputable.h | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/cputable.h
b/arch/powerpc/include/asm/cputa
This patch series is against the powerpc next branch. It takes
advantage of the fact that there are only a few CPU feature bits that
are meaningful on both 32-bit and 64-bit platforms. At the moment,
many of the 64 bits of the CPU feature mask on 64-bit platforms are
taken up with bits which are
lue in
non-transactional state (e.g. after a treclaim), and treclaim will
work correctly.
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
b/arch/p
POWER9 has some shortcomings in its implementation of transactional
memory. Starting with v2.2 of the "Nimbus" chip, some changes have
been made to the hardware which make it able to generate hypervisor
interrupts in the situations where hardware needs the hypervisor to
provide some assistance wit
subsystem is in use, the software assistance
can be enabled using a "tm-suspend-hypervisor-assist" node in the
device tree. In the absence of such a node, a quirk enables the
assistance for POWER9 "Nimbus" DD2.2 processors.
Signed-off-by: Paul Mackerras
---
arch/power
smt4_catch() function returns,
until the pnv_power9_force_smt4_release() function is called.
It undoes the effect of step 1 above and allows the other threads
to go into a stop state.
Signed-off-by: Paul Mackerras
---
arch/powerpc/include/asm/asm-prototypes.h | 3 ++
arch/powerpc/include/asm/paca.h |