Hi,
On 2023-03-16 14:39:08, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-03-15 15:48:53, Michael Ellerman wrote:
> >> Kautuk Consul writes:
> >> > kvmppc_hv_entry is called from only 2 locations within
> >> > book3s_hv_rm
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_INNER_LABEL.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
kvmppc_hv_entry is called from only 2 locations within
book3s_hv_rmhandlers.S. Both of those locations set r4
as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
So, shift the r4 load instruction to kvmppc_hv_entry and
thus modify the calling convention of this function.
Signed-off-by: Kautuk
- remove .global scope of kvmppc_hv_entry
- remove r4 argument to kvmppc_hv_entry as it is not required
Changes since v2:
- Add the lwsync instruction before the load to r4 to order
load of vcore before load of vcpu
Kautuk Consul (2):
arch/powerpc/kvm: kvmppc_hv_entry: remove .global scope
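The calling-convention change described in the changelog above could look roughly like the following inside book3s_hv_rmhandlers.S. This is only a sketch: the label layout and surrounding code are simplified, and the exact placement of the lwsync relative to the vcore load is assumed from the v2 changelog text.

```asm
kvmppc_hv_entry:
	/* Callers no longer set up r4; load the vcpu pointer here instead. */
	ld	r5, HSTATE_KVM_VCORE(r13)	/* load vcore first */
	lwsync					/* order vcore load before vcpu load */
	ld	r4, HSTATE_KVM_VCPU(r13)	/* vcpu pointer, formerly an argument */
```

Since both call sites previously performed the same `ld r4, HSTATE_KVM_VCPU(r13)` before branching here, hoisting it into the callee removes the duplicated setup at each caller.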
Hi,
On 2023-03-22 23:17:35, Michael Ellerman wrote:
> Kautuk Consul writes:
> > kvmppc_hv_entry is called from only 2 locations within
> > book3s_hv_rmhandlers.S. Both of those locations set r4
> > as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
> > So, sh
kvmppc_vcore_create() might not be able to allocate memory through
kzalloc. In that case the kvm->arch.online_vcores shouldn't be
incremented.
Add a check for kzalloc failure and return with -ENOMEM from
kvmppc_core_vcpu_create_hv().
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_h
Hi,
On 2023-03-21 14:15:14, Nicholas Piggin wrote:
> On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > kvmppc_hv_entry is called from only 2 locations within
> > book3s_hv_rmhandlers.S. Both of those locations set r4
> > as HSTATE_KVM_VCPU(r13) before calling kv
Hi everyone,
Anyone interested in reviewing this small patch-set?
I tested it on P8 and it works fine.
Thanks.
On 2023-03-06 07:37:38, Kautuk Consul wrote:
> - remove .global scope of kvmppc_hv_entry
> - remove r4 argument to kvmppc_hv_entry as it is not required
>
> Chan
On 2023-03-21 10:24:36, Kautuk Consul wrote:
> > Is r4 there only used for CONFIG_KVM_BOOK3S_HV_P8_TIMING? Could put it
> > under there. Although you then lose the barrier if it's disabled, that
> > is okay if you're sure that's the only memory operation being ordered.
On 2023-03-15 15:48:53, Michael Ellerman wrote:
> Kautuk Consul writes:
> > kvmppc_hv_entry is called from only 2 locations within
> > book3s_hv_rmhandlers.S. Both of those locations set r4
> > as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
> > So, sh
On 2023-03-15 10:48:01, Kautuk Consul wrote:
> On 2023-03-15 15:48:53, Michael Ellerman wrote:
> > Kautuk Consul writes:
> > > kvmppc_hv_entry is called from only 2 locations within
> > > book3s_hv_rmhandlers.S. Both of those locations set r4
> > > as
> On Wed, Feb 22, 2023 at 07:02:34AM +, Christophe Leroy wrote:
> > > +/* Redefine rmb() to lwsync. */
> >
> > What's the added value of this comment? Isn't it obvious in the line
> > below that rmb() is being defined to lwsync? Please avoid useless comments.
> Sure.
Sorry, forgot to add
> No, I don't mean to use the existing #ifdef/elif/else.
>
> Define an #ifdef /#else dedicated to xmb macros.
>
> Something like that:
>
> @@ -35,9 +35,15 @@
>* However, on CPUs that don't support lwsync, lwsync actually maps to a
>* heavy-weight sync, so smp_wmb() can be a
Again, could some IBM/non-IBM employees do basic sanity kernel load
testing on PPC64 UP and SMP systems for this patch?
I would deeply appreciate it! :-)
Thanks again!
On Wed, Feb 22, 2023 at 07:02:34AM +, Christophe Leroy wrote:
>
>
> On 22/02/2023 at 07:01, Kautuk Consul wrote:
> > A link from ibm.com states:
> > "Ensures that all instructions preceding the call to __lwsync
> > complete before any subsequent st
On Wed, Feb 22, 2023 at 08:28:19AM +, Christophe Leroy wrote:
>
>
> On 22/02/2023 at 09:21, Kautuk Consul wrote:
> >> On Wed, Feb 22, 2023 at 07:02:34AM +, Christophe Leroy wrote:
> >>>> +/* Redefine rmb() to lwsync. */
> >>>
> &g
Sorry, sent the wrong patch!
Please ignore this one.
Sending the v2 in another email.
On Wed, Feb 22, 2023 at 02:31:12PM +0530, Kautuk Consul wrote:
> A link from ibm.com states:
> "Ensures that all instructions preceding the call to __lwsync
> complete before any subsequent stor
On Wed, Feb 22, 2023 at 09:44:54AM +, Christophe Leroy wrote:
>
>
> > On 22/02/2023 at 10:30, Kautuk Consul wrote:
> > Again, could some IBM/non-IBM employees do basic sanity kernel load
> > testing on PPC64 UP and SMP systems for this patch?
> > would deeply appr
>
> Reviewed-by: Christophe Leroy
Thanks!
>
> > ---
> > arch/powerpc/include/asm/barrier.h | 7 +++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/arch/powerpc/include/asm/barrier.h
> > b/arch/powerpc/include/asm/barrier.h
> > index b95b666f0374..e088dacc0ee8 100644
> > ---
re defined to lwsync.
But this same understanding applies to parallel pipeline
execution on each PowerPC processor.
So, use the lwsync instruction for rmb() and wmb() on the PPC
architectures that support it.
Also remove some useless spaces.
Signed-off-by: Kautuk Consul
---
arch/powerpc/i
> >> I'd have preferred with 'asm volatile' though.
> > Sorry about that! That wasn't the intent of this patch.
> > Probably another patch series should change this manner of #defining
> > assembly.
>
> Why add new lines wrong and then need another patch to make them
> right?
>
> When
re defined to lwsync.
But this same understanding applies to parallel pipeline
execution on each PowerPC processor.
So, use the lwsync instruction for rmb() and wmb() on the PPC
architectures that support it.
Signed-off-by: Kautuk Consul
---
arch/powerpc/include/asm/barrier.h | 7 +++
1 file
Hi Sathvika,
(Sorry didn't include list in earlier email.)
On Mon, Feb 20, 2023 at 12:35:09PM +0530, Sathvika Vasireddy wrote:
> Hi Kautuk,
>
> On 20/02/23 10:53, Kautuk Consul wrote:
> > kvmppc_hv_entry isn't called from anywhere other than
> > book3s_hv_rmhandlers.S its
On Mon, Feb 20, 2023 at 01:31:40PM +0530, Sathvika Vasireddy wrote:
> Placing SYM_FUNC_END(kvmppc_hv_entry) before kvmppc_got_guest() should do:
>
> @@ -502,12 +500,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
> * *
>
On Mon, Feb 20, 2023 at 01:41:38PM +0530, Kautuk Consul wrote:
> On Mon, Feb 20, 2023 at 01:31:40PM +0530, Sathvika Vasireddy wrote:
> > Placing SYM_FUNC_END(kvmppc_hv_entry) before kvmppc_got_guest() should do:
> >
> > @@ -502,12 +500,10 @@ END_FTR_SECTION_IF
- remove .global scope of kvmppc_hv_entry
- remove r4 argument to kvmppc_hv_entry as it is not required
Kautuk Consul (2):
arch/powerpc/kvm: kvmppc_hv_entry: remove .global scope
arch/powerpc/kvm: kvmppc_hv_entry: remove r4 argument
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 10 --
1
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
b
kvmppc_hv_entry is called from only 2 locations within
book3s_hv_rmhandlers.S. Both of those locations set r4
as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
So, shift the r4 load instruction to kvmppc_hv_entry and
thus modify the calling convention of this function.
Signed-off-by: Kautuk
On 2023-02-24 16:45:45, Sathvika Vasireddy wrote:
> On 23/02/23 10:39, Kautuk Consul wrote:
>
> > Hi Sathvika,
> > > Just one question though. Went through the code again and I think
> > > that this place shouldn't be proper to insert a SYM_FUNC_END
> > &g
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_INNER_LABEL.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
Hi,
On 2023-02-20 10:53:55, Kautuk Consul wrote:
> kvmppc_hv_entry is called from only 2 locations within
> book3s_hv_rmhandlers.S. Both of those locations set r4
> as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
> So, shift the r4 load instruction to kvmppc_hv_entry and
&
kvmppc_hv_entry is called from only 2 locations within
book3s_hv_rmhandlers.S. Both of those locations set r4
as HSTATE_KVM_VCPU(r13) before calling kvmppc_hv_entry.
So, shift the r4 load instruction to kvmppc_hv_entry and
thus modify the calling convention of this function.
Signed-off-by: Kautuk
- remove .global scope of kvmppc_hv_entry
- remove r4 argument to kvmppc_hv_entry as it is not required
Changes since v1:
- replaced .global by SYM_INNER_LABEL for kvmppc_hv_entry
Kautuk Consul (2):
arch/powerpc/kvm: kvmppc_hv_entry: remove .global scope
arch/powerpc/kvm: kvmppc_hv_entry
On 2023-02-22 09:47:19, Paul E. McKenney wrote:
> On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote:
> > A link from ibm.com states:
> > "Ensures that all instructions preceding the call to __lwsync
> > complete before any subsequent store in
On 2023-02-22 20:16:10, Paul E. McKenney wrote:
> On Thu, Feb 23, 2023 at 09:31:48AM +0530, Kautuk Consul wrote:
> > On 2023-02-22 09:47:19, Paul E. McKenney wrote:
> > > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote:
> > > > A link from ibm.com sta
On 2023-02-23 14:51:25, Michael Ellerman wrote:
> Hi Paul,
>
> "Paul E. McKenney" writes:
> > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote:
> >> A link from ibm.com states:
> >> "Ensures that all instructions preceding the call
> You are correct, the patch is wrong because it fails to account for IO
> accesses.
Okay, I looked at the PowerPC ISA and found:
"The memory barrier provides an ordering function for the storage accesses
caused by Load, Store, and dcbz instructions that are executed by the processor
executing
Hi Sathvika,
>
> Just one question though. Went through the code again and I think
> that this place shouldn't be proper to insert a SYM_FUNC_END
> because we haven't entered the guest at this point and the name
> of the function is kvmppc_hv_entry which I think implies that
> this SYM_FUNC_END
re defined to lwsync.
But this same understanding applies to parallel pipeline
execution on each PowerPC processor.
So, use the lwsync instruction for rmb() and wmb() on the PPC
architectures that support it.
Also remove some useless spaces.
Signed-off-by: Kautuk Consul
---
arch/powerpc/i
Hi All,
On Wed, Feb 22, 2023 at 11:31:07AM +0530, Kautuk Consul wrote:
> /* The sub-arch has lwsync */
> #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
> -#define SMPWMB LWSYNC
> +#undef rmb
> +#undef wmb
> +/* Redefine rmb() to lwsync. */
> +#define r
On 2023-04-12 12:34:13, Kautuk Consul wrote:
> Hi,
>
> On 2023-04-11 16:35:10, Michael Ellerman wrote:
> > Kautuk Consul writes:
> > > On 2023-04-07 09:01:29, Sean Christopherson wrote:
> > >> On Fri, Apr 07, 2023, Bagas Sanjaya wrote:
> > >> &g
On 2023-03-30 10:59:19, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-03-28 23:02:09, Michael Ellerman wrote:
> >> Kautuk Consul writes:
> >> > On 2023-03-28 15:44:02, Kautuk Consul wrote:
> >> >> On 2023-03-28 20:44:48, Michael El
On 2023-03-27 19:51:34, Nicholas Piggin wrote:
> On Mon Mar 27, 2023 at 7:34 PM AEST, Kautuk Consul wrote:
> > On 2023-03-27 14:58:03, Kautuk Consul wrote:
> > > On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> > > > On Thu Mar 16, 2023 at 3:1
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_CODE_START_LOCAL
and SYM_CODE_END.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 ++--
1 file changed, 2
On 2023-03-27 15:25:24, Kautuk Consul wrote:
> On 2023-03-27 19:51:34, Nicholas Piggin wrote:
> > On Mon Mar 27, 2023 at 7:34 PM AEST, Kautuk Consul wrote:
> > > On 2023-03-27 14:58:03, Kautuk Consul wrote:
> > > > On 2023-03-27 19:19:37, Nicholas Piggin wrote:
>
kvmppc_hv_entry isn't called from anywhere other than
book3s_hv_rmhandlers.S itself. Remove the .global scope for
this function and annotate it with SYM_CODE_START_LOCAL
and SYM_CODE_END.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 ++--
1 file changed, 2
On 2023-03-27 20:30:02, Nicholas Piggin wrote:
> On Mon Mar 27, 2023 at 8:04 PM AEST, Kautuk Consul wrote:
> > kvmppc_hv_entry isn't called from anywhere other than
> > book3s_hv_rmhandlers.S itself. Removing .global scope for
> > this function and annotating it with
On 2023-03-27 15:04:38, Kautuk Consul wrote:
> On 2023-03-27 14:58:03, Kautuk Consul wrote:
> > On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> > > On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > > > kvmppc_hv_entry isn't c
On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > kvmppc_hv_entry isn't called from anywhere other than
> > book3s_hv_rmhandlers.S itself. Removing .global scope for
> > this function and annotating it
On 2023-03-27 14:58:03, Kautuk Consul wrote:
> On 2023-03-27 19:19:37, Nicholas Piggin wrote:
> > On Thu Mar 16, 2023 at 3:10 PM AEST, Kautuk Consul wrote:
> > > kvmppc_hv_entry isn't called from anywhere other than
> > > book3s_hv_rmhandlers.S its
On 2023-04-07 09:01:29, Sean Christopherson wrote:
> On Fri, Apr 07, 2023, Bagas Sanjaya wrote:
> > On Fri, Apr 07, 2023 at 05:31:47AM -0400, Kautuk Consul wrote:
> > > I used the unlikely() macro on the return values of the k.alloc
> > > calls and found that it change
Hi,
On 2023-04-11 16:35:10, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-04-07 09:01:29, Sean Christopherson wrote:
> >> On Fri, Apr 07, 2023, Bagas Sanjaya wrote:
> >> > On Fri, Apr 07, 2023 at 05:31:47AM -0400, Kautuk Consul wrote:
> &
I used the unlikely() macro on the return values of the k.alloc
calls and found that it changes the code generation a bit.
Optimize all return paths of the k.alloc calls by improving
branch prediction on their return values.
Signed-off-by: Kautuk Consul
---
arch/powerpc/kvm/book3s_hv_nested.c
Hi,
On 2023-03-23 03:47:18, Kautuk Consul wrote:
> kvmppc_vcore_create() might not be able to allocate memory through
> kzalloc. In that case the kvm->arch.online_vcores shouldn't be
> incremented.
> Add a check for kzalloc failure and return with -ENOMEM from
> kvmppc_c
On 2023-03-28 15:44:02, Kautuk Consul wrote:
> On 2023-03-28 20:44:48, Michael Ellerman wrote:
> > Kautuk Consul writes:
> > > kvmppc_vcore_create() might not be able to allocate memory through
> > > kzalloc. In that case the kvm->arch.online_vcores shouldn't be
On 2023-03-28 20:44:48, Michael Ellerman wrote:
> Kautuk Consul writes:
> > kvmppc_vcore_create() might not be able to allocate memory through
> > kzalloc. In that case the kvm->arch.online_vcores shouldn't be
> > incremented.
>
> I agree that looks wrong.
>
&
On 2023-03-28 23:02:09, Michael Ellerman wrote:
> Kautuk Consul writes:
> > On 2023-03-28 15:44:02, Kautuk Consul wrote:
> >> On 2023-03-28 20:44:48, Michael Ellerman wrote:
> >> > Kautuk Consul writes:
> >> > > kvmppc_vcore_create() might not be abl
Hi Everyone,
On 2023-06-08 08:34:48, Kautuk Consul wrote:
> - Enable CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL as ppc64 is weakly
> ordered.
> - Enable CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP because the
> kvmppc_xive_native_set_attr is called in the context of an ioctl
> syscal
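Enabling the two options described above would amount to a Kconfig change along these lines. This is a sketch under the assumption that the series wires the selects into the HV Kconfig entry; the actual symbol chosen in the patch may differ.

```
config KVM_BOOK3S_64_HV
	tristate "KVM for POWER... (existing entry, abbreviated)"
	# ... existing selects ...
	select HAVE_KVM_DIRTY_RING_ACQ_REL
	select NEED_KVM_DIRTY_RING_WITH_BITMAP
```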
Hi Jordan,
On 2023-07-06 14:15:13, Jordan Niethe wrote:
>
>
> On 8/6/23 10:34 pm, Kautuk Consul wrote:
>
> Need at least a little context in the commit message itself:
>
> "Enable ring-based dirty memory tracking on ppc64:"
Sure will take this i
opy ram: 2603645 kbytes
downtime ram: 9254 kbytes
Signed-off-by: Kautuk Consul
---
Documentation/virt/kvm/api.rst | 2 +-
arch/powerpc/include/uapi/asm/kvm.h | 2 ++
arch/powerpc/kvm/Kconfig | 2 ++
arch/powerpc/kvm/book3s.c | 46 +
a
ion to support
the CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT config option.
On testing with live migration it was found that there is around
150-180 ms improvement in overall migration time with this patch.
Signed-off-by: Kautuk Consul
---
Documentation/virt/kvm/api.rst | 2 +-
arch/powerpc/incl
Hi Nick/Gavin/Everyone,
On 2023-06-08 08:34:48, Kautuk Consul wrote:
> - Enable CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL as ppc64 is weakly
> ordered.
> - Enable CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP because the
> kvmppc_xive_native_set_attr is called in the context of an ioctl