From: Nicholas Piggin
THP paths can defer splitting compound pages until after the actual
remap and TLB flushes to split a huge PMD/PUD. This causes radix
partition-scope page table mappings to get out of sync with the host
QEMU page table mappings.
This results in random memory corruption in t
nel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Nicholas Piggin
Signed-off-by: Paul Mackerras
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 91 +++---
1 file changed, 37 insertions(+), 54 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_
neesh Kumar K.V"
Cc: kvm-...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Nicholas Piggin
Signed-off-by: Paul Mackerras
(cherry picked from commit 71d29f43b6332badc5598c656616a62575e83342 v4.19)
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_mmu_ra
d.oprofile_cpu_type != NULL) {
> t->oprofile_cpu_type = old.oprofile_cpu_type;
> t->oprofile_type = old.oprofile_type;
> + t->cpu_features |= old.cpu_features & CPU_FTR_PMAO_BUG;
> }
> }
>
Looks good to me.
Reviewed-by: Leonardo Bras
signature.asc
Description: This is a digitally signed message part
rable behavior, and should cause no change
if the 'movable_node' parameter is not passed to the kernel.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kernel/prom.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 6620f37abe73..f4
ut some flag combination that works
fine for both use-cases; if PowerVM doesn't pass 'movable_node' as a boot
parameter to the kernel, it will behave just as it does today.
What are your thoughts on that?
Best regards,
Leonardo Bras
On Wed, 2020-03-04 at 04:18 -0300, Leonardo Bras wrote:
> Humm, this makes sense.
> But with my change, these pieces of memory only get into ZONE_MOVABLE
> if the boot parameter 'movable_node' gets passed to guest kernel.
Humm, I think your patch also does that.
> So, ev
Here, checking for this new flag and
marking memblocks as hotpluggable memory is enough to get the desired
behavior.
This should cause no change if the 'movable_node' parameter is not passed
on the kernel command line.
Signed-off-by: Leonardo Bras
---
The new flag was already proposed on Power Ar
On Thu, 2020-03-05 at 20:32 -0300, Leonardo Bras wrote:
> I will send the matching qemu change as a reply later.
http://patchwork.ozlabs.org/patch/1249931/
spin_until_cond() will wait until nmi_ipi_busy == false, and
nmi_ipi_lock_start() does not seem to change nmi_ipi_busy, so there is
no way this 'while' loop can ever repeat.
Replace the 'while' with an 'if', so it does not look like it can repeat.
Signed-off-by: Leonardo Bras
---
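A minimal sketch of the control flow described above. The names mirror the thread (nmi_ipi_busy, spin_until_cond); the actual lock machinery is omitted and the functions here are hypothetical models, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* spin_until_cond() only returns once the condition holds, and
 * nothing in the body sets nmi_ipi_busy again, so a surrounding
 * 'while' can never iterate twice: an 'if' expresses the same
 * behavior without implying a loop. */
static bool nmi_ipi_busy = false;

static void spin_until_not_busy(void)
{
    while (nmi_ipi_busy)        /* models spin_until_cond() */
        ;
}

static int nmi_ipi_lock_start_model(void)
{
    int iterations = 0;

    if (nmi_ipi_busy) {         /* was: while (nmi_ipi_busy) */
        iterations++;
        spin_until_not_busy();  /* afterwards busy == false */
    }
    return iterations;
}
```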
On Fri, 2020-03-27 at 08:40 +1100, Paul Mackerras wrote:
> On Thu, Mar 26, 2020 at 05:37:52PM -0300, Leonardo Bras wrote:
> > spin_until_cond() will wait until nmi_ipi_busy == false, and
> > nmi_ipi_lock_start() does not seem to change nmi_ipi_busy, so there is
> > no way
kdump may not be saved for crash analysis.
Skip spinlocks after the NMI IPI is sent to all other CPUs.
Signed-off-by: Leonardo Bras
---
arch/powerpc/include/asm/spinlock.h | 6 ++
arch/powerpc/kexec/crash.c | 3 +++
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/include/asm
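A sketch of the approach described above: after the crash path sends the NMI IPI, a global flag makes spinlock acquisition a no-op, so the crashing CPU cannot deadlock on locks still held by the CPUs it just stopped. crash_skip_spinlock is the variable named later in this thread; the lock itself is a stand-in, not the real arch_spin_lock():

```c
#include <assert.h>
#include <stdbool.h>

/* Set by the crash path once all other CPUs have been stopped. */
static bool crash_skip_spinlock = false;

static int lock_word;

static void arch_spin_lock_model(void)
{
    if (crash_skip_spinlock)    /* crash in progress: don't spin */
        return;
    lock_word = 1;              /* stand-in for the real acquire */
}
```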
kdump may not be saved for crash analysis.
Skip spinlocks after the NMI IPI is sent to all other CPUs.
Signed-off-by: Leonardo Bras
---
arch/powerpc/include/asm/spinlock.h | 6 ++
arch/powerpc/kexec/crash.c | 4
2 files changed, 10 insertions(+)
diff --git a/arch/powerpc/include
oops, forgot to EXPORT_SYMBOL.
arch_spin_lock*() is used in modules.
Sending v2.
On Thu, 2020-03-26 at 19:28 -0300, Leonardo Bras wrote:
> During a crash, there is chance that the cpus that handle the NMI IPI
> are holding a spin_lock. If this spin_lock is needed by crashing_cpu it
> w
Hello Christophe, thanks for the feedback.
I noticed an error in this patch and sent a v2, which can be seen here:
http://patchwork.ozlabs.org/patch/1262468/
Comments inline:
On Fri, 2020-03-27 at 07:50 +0100, Christophe Leroy wrote:
> > @@ -142,6 +144,8 @@ static inline void arch_spin_lock(arch
Hello Michael,
On Fri, 2020-03-27 at 14:50 +1100, Michael Ellerman wrote:
> Hi Leonardo,
>
> Leonardo Bras writes:
> > During a crash, there is chance that the cpus that handle the NMI IPI
> > are holding a spin_lock. If this spin_lock is needed by crashing_cpu it
>
Hello Peter,
On Mon, 2020-03-30 at 13:02 +0200, Peter Zijlstra wrote:
> do {
> > > + if (unlikely(crash_skip_spinlock))
> > > + return;
> >
> > You are adding a test that reads a global var in the middle of a so hot path
> > ? That must kill
Hello Christophe,
On Sat, 2020-03-28 at 10:19 +, Christophe Leroy wrote:
> Hi Leonardo,
>
>
> > On 03/27/2020 03:51 PM, Leonardo Bras wrote:
> > >
> > [SNIP]
> > - If the lock is already free, it would change nothing,
> > - Otherwise, the lock will
kdump may not be saved for crash analysis.
After the NMI IPI is sent to all other CPUs, force-unlock the spinlocks
needed to finish the crash routine.
Signed-off-by: Leonardo Bras
---
Changes from v2:
- Instead of skipping spinlocks, unlock the needed ones.
Changes from v1:
- Exported variable
On Thu, 2020-03-05 at 20:32 -0300, Leonardo Bras wrote:
> ---
> The new flag was already proposed on Power Architecture documentation,
> and it's waiting for approval.
>
> I would like to get your comments on this change, but it's still not
> ready for being merged.
Hello Peter,
On Wed, 2020-04-01 at 11:26 +0200, Peter Zijlstra wrote:
> You might want to add a note to your asm/spinlock.h that you rely on
> spin_unlock() unconditionally clearing a lock.
>
> This isn't naturally true for all lock implementations. Consider ticket
> locks, doing a surplus unloc
Hello Bharata, thank you for reviewing and testing!
During review of this new flag, it was suggested to change its name to
a better one (from the platform's viewpoint).
So I will have to change the flag name from DRCONF_MEM_HOTPLUGGED to
DRCONF_MEM_HOTREMOVABLE.
Everything should work the same as to
boot, the guest kernel reads the device tree and early_init_drmem_lmb()
is called for every added LMB. Here, checking for this new flag and
marking memblocks as hotpluggable memory is enough to get the desired
behavior.
This should cause no change if the 'movable_node' parameter is not passed
in kernel comman
ux, all memory is added
with the same flags (ASSIGNED).
To create a solution that doesn't break PowerVM, this new flag was made
necessary.
Best regards,
Leonardo Bras
On Fri, 2020-04-03 at 10:31 +1100, Oliver O'Halloran wrote:
> On Fri, Apr 3, 2020 at 10:07 AM Leonardo Bras wrote:
> > Hello Oliver, thank you for the feedback.
> > Comments inline:
> >
> > On Fri, 2020-04-03 at 09:46 +1100, Oliver O'Halloran wrote:
> &g
On Thu, 2020-04-02 at 22:28 +1100, Michael Ellerman wrote:
> Leonardo Bras writes:
> > During a crash, there is chance that the cpus that handle the NMI IPI
> > are holding a spin_lock. If this spin_lock is needed by crashing_cpu it
> > will cause a deadlock. (rtas.lock and
Hello Bharata,
On Fri, 2020-04-03 at 20:08 +0530, Bharata B Rao wrote:
> The patch would be more complete with the following change that ensures
> that DRCONF_MEM_HOTREMOVABLE flag is set for non-boot-time hotplugged
> memory too. This will ensure that ibm,dynamic-memory-vN property
> reflects the
message, and avoid locking
logbuf_lock.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kexec/crash.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index d488311efab1..9b73e3991bf4 100644
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc
Fixes a possible use-after-free of the kvm variable in
kvm_vm_ioctl_create_spapr_tce(), which does a mutex_unlock(&kvm->lock)
after a kvm_put_kvm(kvm).
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_vio.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
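A toy model of the ordering fix described above. The struct is a hypothetical stand-in for the real kvm object, not the KVM API; the point is only that the mutex must be released while a reference is still held, because the put may free the object:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct kvm_model {
    int refcount;
    bool locked;
};

static bool object_freed;

/* Models kvm_put_kvm(): dropping the last reference frees kvm. */
static void put_model(struct kvm_model *kvm)
{
    if (--kvm->refcount == 0) {
        object_freed = true;    /* the object may vanish here */
        free(kvm);
    }
}

/* Fixed error path: unlock first, drop the reference last. */
static void create_error_path(struct kvm_model *kvm)
{
    kvm->locked = false;        /* mutex_unlock(&kvm->lock) */
    put_model(kvm);             /* kvm must not be touched after */
}
```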
iables on code, relying
more on the contents of the kvm struct.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/e500_mmu_host.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 321db0fdb9db..425d138066
tce_release (from v1)
- Fixes possible 'use after free' on kvm_vm_ioctl_create_spapr_tce
- Fixes undeclared variable error
Build test:
- https://travis-ci.org/LeoBras/linux-ppc/builds/608807573
Leonardo Bras (4):
powerpc/kvm/book3s: Fixes possible 'use after release' of k
iables on code, relying
more on the contents of the kvm struct.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 10 +-
arch/powerpc/kvm/book3s_64_vio.c| 10 ++
arch/powerpc/kvm/book3s_hv.c| 10 +-
3 files changed, 16 insertions(+), 14 del
iables on code, relying
more on the contents of the kvm struct.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/booke.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index be9a45874194..fd7bdb4f8f87 100644
--- a/arch/powerpc/kv
On Thu, 2019-11-14 at 20:07 +1100, Michael Ellerman wrote:
> On Fri, 2019-08-02 at 13:39:15 UTC, Leonardo Bras wrote:
> > Changes the return variable to bool (as the return value) and
> > avoids doing a ternary operation before returning.
> >
> > Signed-off-by: Leonard
On Tue, 2019-11-12 at 15:57 +1100, Michael Ellerman wrote:
> Hi Leonardo,
Hello Michael, thanks for the feedback!
>
> Leonardo Bras writes:
> > Fixes a possible 'use after free' of kvm variable in
> > kvm_vm_ioctl_create_spapr_tce, where it does a mutex_unlo
On Thu, 2019-11-14 at 15:43 -0300, Leonardo Bras wrote:
> > If the kvm_put_kvm() you've moved actually caused the last
> > reference
> > to
> > be dropped that would mean that our caller had passed us a kvm
> > struct
> > without holding a reference to it,
Fixes a possible use-after-free of the kvm variable.
It calls mutex_unlock(&kvm->lock) after possibly freeing the variable
with kvm_put_kvm(kvm).
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_vio.c | 3 +--
virt/kvm/kvm_main.c | 8
2
)
- Fixes possible 'use after free' on kvm_vm_ioctl_create_spapr_tce
- Fixes undeclared variable error
Leonardo Bras (2):
powerpc/kvm/book3s: Replace current->mm by kvm->mm
powerpc/kvm/book3e: Replace current->mm by kvm->mm
arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 ++--
arch/powe
iables on code, relying
more on the contents of the kvm struct.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 ++--
arch/powerpc/kvm/book3s_64_vio.c| 10 ++
arch/powerpc/kvm/book3s_hv.c| 10 +-
3 files changed, 13 insertions(+), 11 deletions(-)
d
Result of Travis-CI testing the change:
https://travis-ci.org/LeoBras/linux-ppc/builds/617712012
On Wed, 2019-11-27 at 17:40 +0100, Paolo Bonzini wrote:
> >
> >if (ret >= 0)
> >list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
> > - else
> > - kvm_put_kvm(kvm);
> >
> >mutex_unlock(&kvm->lock);
> >
> >if (ret >= 0)
> >
On Thu, 2019-11-28 at 09:57 +1100, Paul Mackerras wrote:
> There isn't a potential use-after-free here. We are relying on the
> property that the release function (kvm_vm_release) cannot be called
> in parallel with this function. The reason is that this function
> (kvm_vm_ioctl_create_spapr_tce)
On Wed, 2019-11-27 at 17:40 +0100, Paolo Bonzini wrote:
> > diff --git a/arch/powerpc/kvm/book3s_64_vio.c
> > b/arch/powerpc/kvm/book3s_64_vio.c
> > index 5834db0a54c6..a402ead833b6 100644
> > --- a/arch/powerpc/kvm/book3s_64_vio.c
> > +++ b/arch/powerpc/kvm/book3s_64_vio.c
> > @@ -316,14 +316,13
On Sun, 2019-12-01 at 22:45 -0800, Ram Pai wrote:
> @@ -206,8 +224,7 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table
> *tbl, long tcenum,
> * from iommu_alloc{,_sg}()
> */
> if (!tcep) {
> - tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
> -
On Thu, 2020-04-02 at 22:28 +1100, Michael Ellerman wrote:
> Leonardo Bras
> TBH I think we could just drop that printk() entirely.
>
> Or we could tell printk() that we're in NMI context so that it uses the
> percpu buffers.
>
> We should probably do the latter a
Hello Michael,
Would it be OK to add this patch for 5.7? Or is it too late?
Regards,
On Tue, 2020-04-07 at 09:30 +0530, Bharata B Rao wrote:
> On Mon, Apr 06, 2020 at 12:41:01PM -0300, Leonardo Bras wrote:
> > Hello Bharata,
> >
> > On Fri, 2020-04-03 at 20:08 +0530, Bharata B
4658acf2a06d851feb2855933)
On the other hand, busting the rtas.lock could be dangerous, because
it's code we can't control.
According to LoPAR, for both of these RTAS calls, we have:
For the PowerPC External Interrupt option: The call must be reentrant
to the number of processors on the
On Wed, 2020-04-08 at 22:21 +1000, Michael Ellerman wrote:
[...]
> > On the other hand, busting the rtas.lock could be dangerous, because
> > it's code we can't control.
> >
> > According with LoPAR, for both of these rtas-calls, we have:
> >
> > For the PowerPC External Interrupt option: The cal
On Wed, 2020-04-08 at 22:21 +1000, Michael Ellerman wrote:
> We should be able to just allocate the rtas_args on the stack, it's only
> ~80 odd bytes. And then we can use rtas_call_unlocked() which doesn't
> take the global lock.
At this point, would it be a problem using kmalloc?
Best regards,
; with the logbuf lock held.
Oh, I thought the CPUs would start crashing after crash_send_ipi(), so
only printk() after that would possibly deadlock.
I was not able to see how the printk() above would deadlock, but I see
no problem adding that at the start of the function.
Best regards,
Leonardo Bras
can be useful to avoid deadlocks in crashing, where rtas-calls are
needed, but some other thread crashed holding the rtas.lock.
Signed-off-by: Leonardo Bras
---
arch/powerpc/include/asm/rtas.h | 1 +
arch/powerpc/kernel/rtas.c | 21 +
arch/
on and reentrant versions. But it seemed like unnecessary
overhead, since the current calls are very few and very straightforward.
What do you think on this?
Best regards,
Leonardo Bras
ntiate_rtas()).
> In the old days we had to make sure the RTAS argument buffer was
> below the 4GB point. If that's still necessary then perhaps putting
> rtas_args inside the PACA would be the way to go.
Yes, we still need to make sure of this. I will study more about PACA
and try
the message,
and avoid locking logbuf_lock.
Suggested-by: Michael Ellerman
Signed-off-by: Leonardo Bras
---
Changes since v1:
- Added an in-code comment explaining the need for the context change
- Moved the function to the start of default_machine_crash_shutdown,
to avoid any printk() locking while crashing
checkpatch.pl, replace uint8_t with u8, and keep
the same type pattern for the whole file, as they are the same
according to powerpc/boot/types.h.
Signed-off-by: Leonardo Bras
---
arch/powerpc/include/asm/rtas-types.h | 124 ++
arch/powerpc/include/asm/rtas.h | 118
it is used in .S files, just leaving the parameter as is.
However, I have noticed no difference in the generated binary after this
change.
Signed-off-by: Leonardo Bras
---
arch/powerpc/include/asm/firmware.h | 75 ++---
1 file changed, 37 insertions(+), 38 deletions(-)
Sorry, there is a typo on my commit message.
's/BIT_MASK/BIT/'
On Thu, 2019-06-13 at 15:02 -0300, Leonardo Bras wrote:
> The main reason of this change is to make these bitmasks more readable.
>
> The macro ASM_CONST() just appends a UL to its parameter, so it can
I noticed these nested ifs can be easily replaced by switch-cases,
which can improve readability.
Signed-off-by: Leonardo Bras
---
.../platforms/pseries/hotplug-memory.c| 26 +--
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/platforms
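An illustration of the readability change described above: nested ifs dispatching on an action code rewritten as a switch. The action values here are hypothetical stand-ins, not the real hotplug-memory action codes:

```c
#include <assert.h>

enum hp_action { HP_ADD = 1, HP_REMOVE = 2, HP_READD = 3 };

/* Each case reads as one flat arm instead of a nested if chain. */
static int dispatch(enum hp_action action)
{
    switch (action) {
    case HP_ADD:
        return 1;
    case HP_REMOVE:
        return 2;
    case HP_READD:
        return 3;
    default:
        return -1;      /* unknown action */
    }
}
```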
Changes the return variable to bool (matching the return type) and
avoids doing a ternary operation before returning.
Also, since rc will always be true, there is no need to do
rc &= bool, as (true && X) will result in X.
Signed-off-by: Leonardo Bras
---
arch/powerpc/platforms/pse
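The boolean identity used in the reasoning above can be demonstrated in a few lines. This is only an illustration of the (true && X) == X argument, not the patch's actual function:

```c
#include <assert.h>
#include <stdbool.h>

/* When rc starts out true, 'rc &= cond' reduces to 'rc = cond',
 * because (true && X) is just X. */
static bool and_into_true(bool cond)
{
    bool rc = true;
    rc &= cond;                 /* same as: rc = cond */
    return rc;
}
```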
On Fri, 2019-08-02 at 22:26 +1000, Michael Ellerman wrote:
> Leonardo Bras writes:
> > I noticed these nested ifs can be easily replaced by switch-cases,
> > which can improve readability.
> >
> > Signed-off-by: Leonardo Bras
> > ---
> > .../platfor
Changes the return variable to bool (matching the return type) and
avoids doing a ternary operation before returning.
Signed-off-by: Leonardo Bras
---
Changes in v2:
- Restored the previous AND-ing logic on rc.
arch/powerpc/platforms/pseries/hotplug-memory.c | 6 +++---
1 file changed, 3 insertions
On Fri, 2019-08-02 at 09:23 +0200, David Hildenbrand wrote:
> subtle changes in a "Change rc variable to bool"
> patch should be avoided.
You are right.
If it was a valid change, I should have given it a patch of its own.
I will keep that in mind next time.
Thanks for helping!
this.
In this case, QEMU does free the vhost IOTLB entry, which fixes the bug.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_vio.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 883a66e76638..841eff3
B that hits.
Not sure if that's the best approach to find the related vhost_dev structures.
What do you think?
Best regards,
Leonardo Bras
pus + 500GB), I could see the munmap time drop
from 275 seconds to 39 ms.
Signed-off-by: Leonardo Bras
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 25 +---
arch/powerpc/include/asm/book3s/64/radix.h | 3 ++-
arch/powerpc/mm/book3s64/radix_pgtable.c | 6 +++--
3 file
ess_pgtbl_walk()
Fixed the behavior of decrementing before the last ptep was used
Link: http://patchwork.ozlabs.org/patch/1163093/
Special thanks for:
Aneesh Kumar, Nick Piggin, Paul Mackerras, Michael Ellerman, Fabiano Rosas,
Dipankar Sarma and Oliver O'Halloran.
Leonardo Bras (11):
asm-g
ed just to make sure there is no speculative
read outside the interrupt disabled area. Other than that, it is not
supposed to have any change of behavior from current code.
It is planned to allow arch-specific versions, so that additional steps can
be added while keeping the code clean.
Signed-off-by
inside
{begin,end}_lockless_pgtbl_walk, there should be no change in
behavior.
Signed-off-by: Leonardo Bras
---
mm/gup.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 1b521e0ac1de..04e6f46993b6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
argument that allows interrupt enable/disable to be skipped:
__begin_lockless_pgtbl_walk() and __end_lockless_pgtbl_walk().
Functions similar to the generic ones are also exported, by calling
the above functions with parameter {en,dis}able_irq = true.
Signed-off-by: Leonardo Bras
---
arch/powerpc
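A minimal model of the helper pairing described above: the begin/end functions wrap the IRQ save/restore so callers don't repeat it, and the double-underscore variants take a flag to skip it (e.g. when called from real mode). The IRQ state is modeled with a counter; the names follow the thread but the bodies are stand-ins, not the kernel implementation:

```c
#include <assert.h>

static int irq_disabled;

static unsigned long __begin_lockless_pgtbl_walk_model(int disable_irq)
{
    if (disable_irq)
        irq_disabled++;         /* models local_irq_save(flags) */
    return 0;                   /* models the saved irq mask */
}

static void __end_lockless_pgtbl_walk_model(unsigned long irq_mask,
                                            int enable_irq)
{
    (void)irq_mask;
    if (enable_irq)
        irq_disabled--;         /* models local_irq_restore(flags) */
}
```

The generic begin_lockless_pgtbl_walk()/end_lockless_pgtbl_walk() described in the series would then simply call these with the flag set.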
Applies the new functions used for tracking lockless pgtable walks to
addr_to_pfn().
local_irq_{save,restore} is already inside {begin,end}_lockless_pgtbl_walk,
so there is no need to repeat it here.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kernel/mce_power.c | 6 +++---
1 file changed, 3
lose meaning now it's not directly passed to local_irq_* functions.
Signed-off-by: Leonardo Bras
---
arch/powerpc/perf/callchain.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index cbc251981209..fd
,restore} is already inside {begin,end}_lockless_pgtbl_walk,
so there is no need to repeat it here.
The variable that saves the IRQ mask was renamed from flags to irq_mask so it
doesn't lose meaning now that it's not directly passed to local_irq_* functions.
Signed-off-by: Leonardo Bras
---
arch/
,
so there is no need to repeat it here.
The variable that saves the IRQ mask was renamed from flags to irq_mask so it
doesn't lose meaning now that it's not directly passed to local_irq_* functions.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/e500_mmu_host.c | 9 +
1 file
E = 0
kvmppc_do_h_enter: Fixes where local_irq_restore() must be placed (after
the last usage of ptep).
Given that some of these functions can be called in real mode, and others
always are, we use __{begin,end}_lockless_pgtbl_walk so we can decide when
to disable interrupts.
Signed-off-by: Leonardo
pgtbl_walk() to mimic the effect of local_irq_enable().
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_64_mmu_hv.c| 6 ++---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 34 +++---
arch/powerpc/kvm/book3s_64_vio_hv.c| 6 -
3 files changed, 39 insertions(
barrier in the functions
- Any counter can be read by any CPU
Due to not locking nor using atomic variables, the impact on the
lockless pagetable walk is intended to be minimal.
Signed-off-by: Leonardo Bras
---
arch/powerpc/mm/book3s64/pgtable.c | 18 ++
1 file changed, 18 insertions(+
cting
too much on the lockless pagetable walk.
Signed-off-by: Leonardo Bras
---
arch/powerpc/mm/book3s64/pgtable.c | 16 +++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/pgtable.c
b/arch/powerpc/mm/book3s64/pgtable.c
index bb138b6
On Thu, 2020-02-06 at 00:08 -0300, Leonardo Bras wrote:
> gup_pgd_range(addr, end, gup_flags, pages, &nr);
> - local_irq_enable();
> + end_lockless_pgtbl_walk(IRQS_ENABLED);
> ret = nr;
> }
>
Just notic
how it works. If you notice something I am missing, please
let me know.
Best regards,
Leonardo Bras
lk(flags) \
> do {
> local_irq_save(flags);
> smp_mb();
> } while (0)
>
Makes sense. But wouldn't inlining produce the same code output?
Best regards,
Leonardo Bras
_ONCE()
> if (pte_present(pte))
> wing=
> else
> ret = -EINVAL;
> }
> end_lockless_pgtbl_walk()
>
> if (ret) {
> pr_err_rate...()
> goto out;
> }
>
>
Sure, looks better that way. I will change that for v7.
+ end_lockless_pgtbl_walk(irq_mask);
> > }
> >
> > /*
> > @@ -1679,16 +1686,16 @@ u16 get_mm_addr_key(struct mm_struct *mm, unsigned
> > long address)
> > {
> > pte_t *ptep;
> > u16 pkey = 0;
> > - unsigned long flags;
>
f what it
does mean.
For other commits, I added:
"Variable that saves the irq mask was renamed from flags to irq_mask so
it doesn't lose meaning now it's not directly passed to local_irq_*
functions."
I can add it to this commit message.
Thanks for the feedback,
Leonardo Bras
On Thu, 2020-02-06 at 06:46 +0100, Christophe Leroy wrote:
>
> Le 06/02/2020 à 04:08, Leonardo Bras a écrit :
> > On powerpc, we need to do some lockless pagetable walks from functions
> > that already have disabled interrupts, specially from real mode with
> > MSR[
m is recommended for clearing high-order bits.
rlwinm r10, r10, 0, ~0x0f00 means:
r10 = (r10 << 0) & ~0x0f00
Which does exactly what the comment suggests.
FWIW:
Reviewed-by: Leonardo Bras
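The rotate-and-mask semantics discussed above can be modeled in C. This is a sketch of the rlwinm behavior the review describes, not the real instruction encoding:

```c
#include <assert.h>
#include <stdint.h>

/* C model of PowerPC rlwinm (rotate left word immediate then AND
 * with mask): rA = rotl32(rS, sh) & mask. With sh = 0 it is a
 * plain AND, which is why it suits clearing bits. */
static uint32_t rlwinm_model(uint32_t rs, unsigned sh, uint32_t mask)
{
    uint32_t rot = sh ? (rs << sh) | (rs >> (32 - sh)) : rs;
    return rot & mask;
}
```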
Before checking for cpu_type == NULL, this same copy happens, so doing
it here will just write the same value to t->oprofile_type
again.
Remove the repeated copy, as it is unnecessary.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kernel/cputable.c | 1 -
1 file changed, 1 delet
ofile_type = old.oprofile_type;
> + if (old.cpu_features & CPU_FTR_PMAO_BUG)
> + t->cpu_features |= CPU_FTR_PMAO_BUG;
What are your thoughts about doing:
t->cpu_features |= old.cpu_features & CPU_FTR_PMAO_BUG;
Also, I would re
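The one-liner suggested above copies CPU_FTR_PMAO_BUG from old to t only when old has it, by masking instead of branching. A minimal sketch, where the flag value is a placeholder and not the kernel's real constant:

```c
#include <assert.h>
#include <stdint.h>

#define CPU_FTR_PMAO_BUG_MODEL 0x00001000u  /* placeholder value */

/* (old_features & FLAG) is either FLAG or 0, so OR-ing it in is
 * equivalent to the if-then-set form, with no branch. */
static uint32_t merge_pmao_bug(uint32_t t_features, uint32_t old_features)
{
    return t_features | (old_features & CPU_FTR_PMAO_BUG_MODEL);
}
```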
break;
> case 0x004e: /* POWER9 bits 12-15 give chip type */
> + case 0x004f: /* POWER9P bits 12-15 give chip type */
> maj = (pvr >> 8) & 0x0F;
> min
old.cpu_features & CPU_FTR_PMAO_BUG;
Also, I would recommend adding a short comment on top of the added
lines explaining why it is needed.
Best regards,
Leonardo Bras
printf("Returned address is %p\n", addr);
> check_bytes(addr);
> - write_bytes(addr);
> - ret = read_bytes(addr);
> + write_bytes(addr, length);
> + ret = read_bytes(addr, length);
>
> /* munmap() length of MAP_HUGETLB memory must be hugepage ali
On Sat, 2020-02-15 at 11:17 +0100, Christophe Leroy wrote:
>
> Le 15/02/2020 à 07:28, Leonardo Bras a écrit :
> > On Sun, 2020-02-09 at 18:14 +, Christophe Leroy wrote:
> > > In ITLB miss handled the line supposed to clear bits 20-23 on the
> > > L2 ITLB entry is
On Sat, 2020-02-15 at 03:49 -0300, Leonardo Bras wrote:
> Hello Christophe, thank you for the patch.
>
> On Thu, 2020-02-06 at 08:42 +, Christophe Leroy wrote:
> > Commit fa7b9a805c79 ("tools/selftest/vm: allow choosing mem size and
> > page size in map_hugetlb&
On Sat, 2020-02-15 at 03:23 -0300, Leonardo Bras wrote:
> Mahesh Salgaonkar writes:
>
> Hello Mahesh,
>
> > POWER9P PVR bits are same as that of POWER9. Hence mask off only the
> > relevant bits for the major revision similar to POWER9.
> >
> > Without this
On Mon, 2020-02-17 at 09:33 +1100, Michael Neuling wrote:
> On Sat, 2020-02-15 at 02:36 -0300, Leonardo Bras wrote:
> > Before checking for cpu_type == NULL, this same copy happens, so doing
> > it here will just write the same value to the t->oprofile_type
> > again.
>
type = old.oprofile_type;
> > }
>
> The action being reduced to a single line, the { } should be removed.
>
> Christophe
I intentionally left it this way because I just reviewed a patch that
will add more items here, which should be merged before this one.
This will avo
On Fri, 2020-02-07 at 01:38 -0300, Leonardo Bras wrote:
> > Why not make them static inline just like the generic ones ?
> >
>
> Sure, can be done. It would save some function calls.
> For that I will define the per-cpu variable in .c and declare it in .h
> All new fun
Hello John, comments inline:
On Fri, 2020-02-07 at 14:54 -0800, John Hubbard wrote:
> On 2/5/20 7:25 PM, Leonardo Bras wrote:
> > On Thu, 2020-02-06 at 00:08 -0300, Leonardo Bras wrote:
> > > gup_pgd_range(addr, end, gup_flags, pages, &nr);
> > > -
1 - 100 of 433 matches