> This is the really low level of guest entry/exit code.
>
> Book3s_64 has an SLB, which stores all ESID -> VSID mappings we're
> currently aware of.
>
> The segments in the guest differ from the ones on the host, so we need
> to switch the SLB to tell the MMU that we're in a new context.
>
> So
> +static void invalidate_pte(struct hpte_cache *pte)
> +{
> + dprintk_mmu("KVM: Flushing SPT %d: 0x%llx (0x%llx) -> 0x%llx\n",
> + i, pte->pte.eaddr, pte->pte.vpage, pte->host_va);
> +
> + ppc_md.hpte_invalidate(pte->slot, pte->host_va,
> +MMU_P
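The mechanism the excerpt describes, flush the SLB and load the guest's ESID -> VSID mappings on entry, can be sketched as a standalone C model (all types and names here are hypothetical stand-ins, not the kernel's):

```c
#include <string.h>

#define SLB_ENTRIES 4

/* An SLB entry maps an effective segment ID to a virtual segment ID. */
struct slb_entry {
	unsigned long esid;
	unsigned long vsid;
	int valid;
};

struct slb {
	struct slb_entry e[SLB_ENTRIES];
};

/* Switching context: invalidate every entry (modeling slbia), then load
 * the new ESID->VSID pairs (modeling a loop of slbmte), so the MMU sees
 * the new segment mappings. */
static void slb_switch(struct slb *hw, const struct slb *next)
{
	int i;

	memset(hw, 0, sizeof(*hw));        /* drop all current translations */
	for (i = 0; i < SLB_ENTRIES; i++)  /* install the new context */
		if (next->e[i].valid)
			hw->e[i] = next->e[i];
}
```

On real hardware the flush and reload are the `slbia`/`slbmte` instructions; here they are modeled as plain memory operations.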
> Neither lfs nor stfs touch the fpscr, so remove the restore/save of it
> around them.
Do some 32 bit processors need this?
In 32 bit before the merge, we used to have code that did:
#if defined(CONFIG_4xx) || defined(CONFIG_E500)
#define cvt_fd without save/restore fpscr
#else
#defin
> > Do some 32 bit processors need this?
> >
> > In 32 bit before the merge, we used to have code that did:
> >
> > #if defined(CONFIG_4xx) || defined(CONFIG_E500)
> > #define cvt_fd without save/restore fpscr
> > #else
> > #define cvt_fd with save/restore fpscr
> > #endif
> >
> > K
> >> Neither lfs nor stfs touch the fpscr, so remove the restore/save of it
> >> around them.
> >
> > Do some 32 bit processors need this?
> >
> > In 32 bit before the merge, we used to have code that did:
> >
> > #if defined(CONFIG_4xx) || defined(CONFIG_E500)
> > #define cvt_fd
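The conditional being recalled in this thread can be sketched as a small standalone C fragment (the CONFIG macro names come from the quote; everything else, including the fpscr stand-in, is illustrative, not the kernel's actual cvt_fd):

```c
static unsigned long fpscr;      /* stand-in for the real FPSCR register */

/* The raw conversion may clobber status flags... */
static double cvt_fd_raw(float f)
{
	fpscr |= 0x1;            /* pretend the conversion set an exception bit */
	return (double)f;
}

/* ...so the generic 32-bit path wrapped it with a save/restore. */
static double cvt_fd_saved(float f)
{
	unsigned long saved = fpscr;
	double d = cvt_fd_raw(f);
	fpscr = saved;           /* caller never sees the clobber */
	return d;
}

#if defined(CONFIG_4xx) || defined(CONFIG_E500)
#define cvt_fd(f) cvt_fd_raw(f)     /* 4xx/e500: no save/restore */
#else
#define cvt_fd(f) cvt_fd_saved(f)   /* everyone else: save/restore fpscr */
#endif
```

With neither CONFIG macro defined, cvt_fd(1.5f) returns 1.5 and leaves the simulated fpscr untouched, which is the behavior the patch argues lfs/stfs never needed wrapping for.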
In message <1282699836.22370.566.ca...@pasglop> you wrote:
> On Tue, 2010-08-24 at 15:15 +1000, Michael Neuling wrote:
> > > > Do some 32 bit processors need this?
> > > >
> > > > In 32 bit before the merge, we use to have code that did:
> >
l, please ack.
It's not really my area of expertise, but it applies and compiles for me
and it's relatively simple, so FWIW...
Acked-by: Michael Neuling
>
>
> Alex
>
> > ---
> > v5->v6
> > - switch_booke_debug_regs() not guarded by the compiler swi
ny of the icp VCPU
pointers. This manifests itself later in boot when trying to raise an
IRQ, resulting in a null pointer dereference/segv.
This moves xics_init() to use dev_base_init() to ensure it happens after
kvm_cpu_init().
Signed-off-by: Michael Neuling
diff --git a/tools/kvm/powerpc/xics.c b/
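The ordering problem described above can be pictured with a tiny standalone sketch (the init-level structure and function bodies here are illustrative stand-ins, not kvmtool's actual API): registering xics_init at a later init level guarantees the per-VCPU icp pointers exist before the XICS code dereferences them.

```c
#include <assert.h>
#include <stddef.h>

#define NR_VCPUS 4

static int icp_storage[NR_VCPUS];
static int *icp[NR_VCPUS];          /* NULL until the VCPUs are created */

/* Runs at the earlier "core" init level: creates VCPUs and their icps. */
static int kvm_cpu_init(void)
{
	int i;

	for (i = 0; i < NR_VCPUS; i++)
		icp[i] = &icp_storage[i];
	return 0;
}

/* Runs at the later "dev_base" init level, so the pointers are valid.
 * Had it run first, the icp[] dereference would be the NULL deref/segv
 * the commit message describes. */
static int xics_init(void)
{
	int i;

	for (i = 0; i < NR_VCPUS; i++)
		assert(icp[i] != NULL);
	return 0;
}
```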
al/kvm/api.txt if
you're happy with all this.
Signed-off-by: Michael Neuling
diff --git a/arch/powerpc/include/uapi/asm/kvm.h
b/arch/powerpc/include/uapi/asm/kvm.h
index 0fb1a6e..33b8007 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -429
On Sat, Aug 31, 2013 at 8:17 AM, Benjamin Herrenschmidt
wrote:
> On Fri, 2013-08-30 at 16:01 +0200, Alexander Graf wrote:
>> >
>> > - The TM state is offset by 0x1000. Other than being bigger than the
>> > SPR space, it's fairly arbitrarily chosen.
>
> Make it higher, just in case
Ok but
On Sat, Aug 31, 2013 at 12:01 AM, Alexander Graf wrote:
>
> On 30.08.2013, at 08:09, Michael Neuling wrote:
>
>> Alex,
>>
>> This reserves space in get/set_one_reg ioctl for the extra guest state
>> needed for POWER8. It doesn't implement these at all, it ju
The TM state is offset by 0x8000.
- For TM, I've done away with VMX and FP and created a single 64x128 bit
VSX register space.
- I've left a space of 1 (at 0x9c) since Paulus needs to add a value
which applies to POWER7 as well.
Signed-off-by: Michael Neuling
diff --git a/Document
> At present, PR KVM and BookE KVM do multiple copies of FP and
> related state because of the way that they use the arrays in the
> thread_struct as an intermediate staging post for the state. They do
> this so that they can use the existing system functions for loading
> and saving state, and
Alexander Graf wrote:
>
> On 09.09.2013, at 09:28, Michael Neuling wrote:
>
> >> At present, PR KVM and BookE KVM do multiple copies of FP and
> >> related state because of the way that they use the arrays in the
> >> thread_struct as an intermediate
there is an active
transaction being started.
This patch is on top of Paulus' recent KVM TM patch set.
Signed-off-by: Michael Neuling
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
b/arch/powerp
This branch label is over a large section so let's give it a real name.
Signed-off-by: Michael Neuling
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
b/arch/powerp
This patch series implements split core mode on POWER8. This enables up to 4
subcores per core which can each independently run guests (per guest SPRs like
SDR1, LPIDR etc are replicated per subcore). Lots more documentation on this
feature in the code and commit messages.
Most of this code is i
identical mechanism to block split core, rework the
secondary inhibit code to be a "HV KVM is active" check. We can then use
that in both the cpu hotplug code and the upcoming split core code.
Signed-off-by: Michael Ellerman
Signed-off-by: Michael Neuling
---
arch/powerpc/include/asm/kvm_pp
given the current split
core mode.
Although threads_per_subcore can change during the life of the system,
the commit that enables that will ensure that threads_per_subcore does
not change during the life of a KVM VM.
Signed-off-by: Michael Ellerman
Signed-off-by: Michael Neuling
---
arch
a guest.
Unlike threads_per_core which is fixed at boot, threads_per_subcore can
change while the system is running. Most code will not want to use
threads_per_subcore.
Signed-off-by: Michael Ellerman
Signed-off-by: Michael Neuling
---
arch/powerpc/include/asm/cputhreads.h | 7 +++
arch
to
online cpus which are not the primary thread within their *sub* core.
On POWER7 and other systems that do not support split core,
threads_per_subcore == threads_per_core and so the check is equivalent.
Signed-off-by: Michael Ellerman
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel
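The onlining rule described above can be sketched with one helper (the name is assumed for illustration, not the kernel's): a CPU may be onlined only when it is the primary, i.e. first, thread of its subcore, and with threads_per_subcore == threads_per_core this reduces to the old per-core check.

```c
#include <stdbool.h>

/* The primary thread of a subcore is the one whose thread index within
 * that subcore is zero; only it may be onlined while HV KVM is active. */
static bool cpu_is_subcore_primary(int cpu, int threads_per_subcore)
{
	return (cpu % threads_per_subcore) == 0;
}
```

On an unsplit POWER8 core (threads_per_subcore == 8) only cpus 0, 8, 16, ... pass; split by 4 (threads_per_subcore == 2) also admits 2, 4, 6, ...; on POWER7, threads_per_subcore == threads_per_core, so the check is equivalent to the existing one.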
mpe.
Signed-off-by: Michael Ellerman
Signed-off-by: Michael Neuling
Signed-off-by: Srivatsa S. Bhat
Signed-off-by: Mahesh Salgaonkar
Signed-off-by: Benjamin Herrenschmidt
---
arch/powerpc/include/asm/reg.h | 9 +
arch/powerpc/platforms/powernv/Makefile | 2 +-
arc
deal
with the interrupt later.
Signed-off-by: Michael Ellerman
Signed-off-by: Michael Neuling
---
arch/powerpc/include/asm/processor.h | 2 +-
arch/powerpc/kernel/idle_power7.S | 9 +
arch/powerpc/platforms/powernv/smp.c | 2 +-
3 files changed, 11 insertions(+), 2 deletions(-)
diff
Joel Stanley wrote:
> Hi Mikey,
>
> On Thu, Apr 24, 2014 at 11:02 AM, Michael Neuling wrote:
> > +static DEVICE_ATTR(subcores_per_core, 0600,
> > + show_subcores_per_core, store_subcores_per_core);
>
> Can we make this 644, so users can query the st
> This patch series implements split core mode on POWER8. This enables up to 4
> subcores per core which can each independently run guests (per guest SPRs like
> SDR1, LPIDR etc are replicated per subcore). Lots more documentation on this
> feature in the code and commit messages.
>
> Most of th
> In parallel to the Processor ID Register (PIR) threaded POWER8 also adds a
> Thread ID Register (TID). Since PR KVM doesn't emulate more than one thread
s/TID/TIR/ above
> per core, we can just always expose 0 here.
I'm not sure if we ever do, but if we IPI ourselves using a doorbell,
we'll ne
On Fri, 2014-05-23 at 11:53 +0200, Alexander Graf wrote:
> On 23.05.14 10:15, Michael Neuling wrote:
> > This patch series implements split core mode on POWER8. This enables up to
> > 4
> > subcores per core which can each independently run guests (per guest SPRs
> &g
> >> Also, is there any performance penalty associated with split core mode?
> >> If not, could we just always default to split-by-4 on POWER8 bare metal?
> > Yeah, there is a performance hit. When you are split (i.e.
> > subcores_per_core = 2 or 4), the core is stuck in SMT8 mode. So if you
> > o
Alex,
> >> If it's the latter, we could just have ppc64_cpu --smt=x also set the
> >> subcore amount in parallel to the thread count.
> > FWIW on powernv we just nap the threads on hotplug.
> >
> >> The reason I'm bringing this up is that I'm not quite sure who would be
> >> the instance doing the
Alex,
> > +static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
> > +unsigned long resource, unsigned long value1,
> > +unsigned long value2)
> > +{
> > + switch (resource) {
> > + case H_SET_MODE_RESOURCE_SET_CIABR:
> > +
exist on POWER8.
Signed-off-by: Michael Neuling
Signed-off-by: Paul Mackerras
---
v2:
add some #defines to make CIABR setting clearer. No functional change.
diff --git a/arch/powerpc/include/asm/hvcall.h
b/arch/powerpc/include/asm/hvcall.h
index 5dbbb29..85bc8c0 100644
--- a/arch/powerpc
in the KVM code, use these defines when we call
h_set_mode. No functional change.
Signed-off-by: Michael Neuling
--
This depends on the KVM h_set_mode patches.
diff --git a/arch/powerpc/include/asm/plpar_wrappers.h
b/arch/powerpc/include/asm/plpar_wrappers.h
index 12c32c5..67859ed 100644
--- a/
On Fri, 2014-05-30 at 18:56 +1000, Michael Ellerman wrote:
> On Thu, 2014-05-29 at 17:45 +1000, Michael Neuling wrote:
> > > > +/* Values for 2nd argument to H_SET_MODE */
> > > > +#define H_SET_MODE_RESOURCE_SET_CIABR 1
> > > > +#defin
On Mon, 2014-06-23 at 12:14 +1000, Gavin Shan wrote:
> The patch implements one OPAL firmware sysfs file to support PCI error
> injection: "/sys/firmware/opal/errinjct", which will be used like the
> way described as follows.
>
> According to PAPR spec, there are 3 RTAS calls related to error inje
Add 'r' to register name r2 in kvmppc_hv_enter.
Also update comment at the top of kvmppc_hv_enter to indicate that R2/TOC is
non-volatile.
Signed-off-by: Michael Neuling
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 3 ++-
1 file changed, 2 insert
This cleans up kvmppc_load/save_fp. It removes unnecessary isyncs. It also
removes the unnecessary resetting of the MSR bits on exit of kvmppc_save_fp.
Signed-off-by: Michael Neuling
Signed-off-by: Paul Mackerras
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 --
1 file changed, 2 deletions
On Tue, 2014-08-19 at 15:24 +1000, Paul Mackerras wrote:
> On Tue, Aug 19, 2014 at 02:59:29PM +1000, Michael Neuling wrote:
> > This cleans up kvmppc_load/save_fp. It removes unnecessary isyncs.
>
> NAK - they are necessary on PPC970, which we (still) support. You
> could put
In message <4ebd46f4.5040...@suse.de> you wrote:
> On 11/11/2011 03:03 AM, Michael Neuling wrote:
> > Currently kvmppc_start_thread() tries to wake other SMT threads via
> > xics_wake_cpu(). Unfortunately xics_wake_cpu only exists when
> > CONFIG_SMP=Y so when compili
Alexander Graf wrote:
> After merging the register type check patches from Ben's tree, the
> hv enabled booke implementation ceased to compile.
>
> This patch fixes things up so everyone's happy again.
Is there a defconfig which catches this?
Mikey
>
> Signed-off-by: Alexander Graf
> ---
>