Create a single function that flushes everything (FP, VMX, VSX, SPE).
Doing this all at once means we only do one MSR write.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/include/asm/switch_to.h | 1 +
arch/powerpc/kernel/process.c | 22 ++
Most of __switch_to() is housekeeping, TLB batching, timekeeping etc.
Move these away from the more complex and critical context switching
code.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/kernel/process.c | 52 +--
1 file chang
Similar to the non TM load_up_*() functions, don't disable the MSR
bits on the way out.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/kernel/fpu.S    | 4 ----
arch/powerpc/kernel/vector.S | 4 ----
2 files changed, 8 deletions(-)
diff --git a/arch/powerpc/kernel/f
We used to allow giveup_*() to be called with a NULL task struct
pointer. Now those cases are handled in the caller we can remove
the checks. We can also remove giveup_altivec_notask() which is also
unused.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/include/asm/switc
Create helper functions to set and clear MSR bits after first
checking if they are already set. Grouping them will make it
easy to avoid the MSR writes in a subsequent optimisation.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/kernel/process.c
for a debug boot option that
does this and catches bad uses in other areas of the kernel.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/crypto/aes-spe-glue.c | 1 +
arch/powerpc/crypto/sha1-spe-glue.c | 1 +
arch/powerpc/crypto/sha256-spe-glue.c | 1 +
arch/p
Add a boot option that strictly manages the MSR unavailable bits.
This catches kernel uses of FP/Altivec/SPE that would otherwise
corrupt user state.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
Documentation/kernel-parameters.txt | 6 ++
arch/powerpc/include/asm/reg.h
UP and SMP, but
in preparation for that remove these UP only optimisations.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/include/asm/processor.h | 6 --
arch/powerpc/include/asm/switch_to.h | 8 ---
arch/powerpc/kernel/fpu.S | 35 ---
arch/powerpc/
microbenchmark using yield():
http://ozlabs.org/~anton/junkcode/context_switch2.c
./context_switch2 --type=yield --fp 0 0
shows an improvement of almost 3% on POWER8.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/kernel/entry_64.S | 15 +--
1 file changed, 1 ins
Instead of having multiple giveup_*_maybe_transactional() functions,
separate out the TM check into a new function called
check_if_tm_restore_required().
This will make it easier to optimise the giveup_*() functions in a
subsequent patch.
Signed-off-by: Anton Blanchard <an...@samba.
mtmsrd_isync() will do an mtmsrd followed by an isync on older
processors. On newer processors we avoid the isync via a feature fixup.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/include/asm/reg.h | 8
arch/powerpc/kernel/process.c
Move the MSR modification into new C functions. Removing it from
the low level functions will allow us to avoid costly MSR writes
by batching them up.
Move the check_if_tm_restore_required() check into these new functions.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/p
More consolidation of our MSR available bit handling.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/include/asm/processor.h | 2 --
arch/powerpc/kernel/fpu.S | 16
arch/powerpc/kernel/process.c | 6 --
arch/powerpc/kernel/ve
an improvement of 3% on POWER8.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/include/asm/switch_to.h | 1 +
arch/powerpc/kernel/process.c | 75
arch/powerpc/kvm/book3s_pr.c | 17 +---
3 files changed, 63 insertions(
Remove a bunch of unnecessary fallback functions and group
things in a more logical way.
Signed-off-by: Anton Blanchard <an...@samba.org>
---
arch/powerpc/include/asm/switch_to.h | 39 ++--
1 file changed, 11 insertions(+), 28 deletions(-)
diff --git
Hi,
> On Sat, Sep 26, 2015 at 04:30:08PM +0200, Torsten Duwe wrote:
> > As I mentioned earlier this year, it's a bad idea to call _mcount
> > from MMU helper functions (e.g. hash_page...), when the
> > profiling/tracing/live-patching/whatever framework might in turn
> > cause another such fault.
target kernel can boot whether or not it includes
> > FIXUP_ENDIAN.
> >
> > This mirrors commit 150b14e7 in kexec-lite.
> >
> > Signed-off-by: Samuel Mendoza-Jonas <sam...@au1.ibm.com>
>
> I would value a review from one of the PPC folks.
Looks good
Thanks Joel, all three applied!
Anton
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
Hi,
Here is another instruction trace from a kernel context switch trace.
Quite a lot of register and CR save/restore code.
Regards,
Anton
c02943d8 fsnotify+0x8 mfcr r12
c02943dc fsnotify+0xc std r20,-96(r1)
c02943e0 fsnotify+0x10 std r21,-88(r1)
Hi Bill, Segher,
I agree with Segher. We already know we have opportunities to do a
better job with shrink-wrapping (pushing this kind of useless
activity down past early exits), so having examples of code to look
at to improve this would be useful.
I'll look out for specific examples. I
is right), but this gives us
something to play with.
Anton
powerpc: Reduce the number of non volatile GPRs to 8
This requires a hacked gcc.
Signed-off-by: Anton Blanchard an...@samba.org
--
Index: linux.junk/arch/powerpc/include/asm/exception-64s.h
can
do.
- SPR writes are slow, so check that the value is changing before
writing it.
A context switch microbenchmark using yield():
http://ozlabs.org/~anton/junkcode/context_switch2.c
./context_switch2 --type=yield 0 0
shows an improvement of almost 10% on POWER8.
Signed-off-by: Anton Blanchard
No need to execute mflr twice.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/kernel/entry_64.S | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 8428280..f8779f2 100644
--- a/arch/powerpc
/context_switch2.c
./context_switch2 --type=yield --fp 0 0
shows an improvement of almost 3% on POWER8.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/kernel/entry_64.S | 15 +--
1 file changed, 1 insertion(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64
Hi Ian,
Nice catch! I wonder if we should be checking for device_type
memory. Ben?
Yes. That's what Linux does.
Ian: I made that change, and slightly modified your commit message.
Look ok?
Looks good to me :)
Excellent, I just pushed the fix.
Anton
Signed-off-by: Ian Munsie imun...@au1.ibm.com
Signed-off-by: Anton Blanchard an...@samba.org
---
kexec_memory_map.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kexec_memory_map.c b/kexec_memory_map.c
index fc1b7af..6103590 100644
Hi Ian,
From: Ian Munsie imun...@au1.ibm.com
If the system has a PCI device with a memory-controller device node,
kexec-lite would spew hundreds of double free warnings and eventually
segfault. This would result in a kexec load failed message from
petitboot.
This was due to
Hi Scott,
Is kexec-lite meant to be specific to book3s-64?
It was originally built to test book3s-64 kexec. Likely some other
issues need fixing for other ppc sub arches, but it is nice to
have a very simple kexec.
Anton
Hi Sam,
Older big-endian ppc64 kernels don't include the FIXUP_ENDIAN check,
meaning if we kexec from a little-endian kernel the target kernel will
fail to boot.
Returning to big-endian before we enter the target kernel ensures that
the target kernel can boot whether or not it includes
mtmsr() does the right thing on 32bit and 64bit, so use it everywhere.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/include/asm/reg.h | 3 +--
arch/powerpc/oprofile/op_model_power4.c | 4 ++--
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc
If we take an alignment exception which we cannot fix, the oops
currently prints:
Unable to handle kernel paging request for unknown fault
Let's print something more useful:
Unable to handle kernel paging request for unaligned access at address
0xc000f77bba8f
Signed-off-by: Anton Blanchard
Hi Nikunj,
Thanks for the patch. Have we tested that this doesn't regress the
non dynamic representation?
Yes, that is tested. And works as expected.
Great, you can add:
Acked-by: Anton Blanchard an...@samba.org
Anton
Hi Nikunj,
From: Nikunj A Dadhania nik...@linux.vnet.ibm.com
powerpc/numa: initialize distance lookup table from drconf path
In some situations, a NUMA guest that supports
ibm,dynamic-memory-reconfiguration node will end up having flat NUMA
distances between nodes. This is because of two
__typeof__(*(ptr)), which will hit the
warning if ptr is marked const.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/include/asm/uaccess.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/uaccess.h
b/arch/powerpc/include/asm/uaccess.h
Hi Laurentiu,
+	if ((TRAP(regs) == 0xf00) && regs->result)
+		return true;
+
+	return false;
Why not just
return (TRAP(regs) == 0xf00) && regs->result;
Could do, it just read a little easier to my tired eyes.
Anton
upstream to fix them.
Anton Blanchard (6):
powerpc: Fix duplicate const clang warning in user access code
powerpc: Only use -mabi=altivec if toolchain supports it
powerpc: Only use -mtraceback=no, -mno-string and -msoft-float if
toolchain supports it
powerpc: Don't use -mno-strict-align
We added -mno-strict-align in commit f036b3681962 (powerpc: Work around little
endian gcc bug) to fix gcc bug http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57134
Clang doesn't understand it. We need to use a conditional because we
can't use the simpler call cc-option here.
Signed-off-by: Anton
Add a conditional around the code to select various gcc only options:
-mabi=elfv2 vs -mcall-aixdesc, and -mcmodel=medium vs -mminimal-toc.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/Makefile b/arch
The -mabi=altivec option is not recognised on LLVM, so use call cc-option
to check for support.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/lib/Makefile | 2 +-
lib/raid6/Makefile | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/lib
These options are not recognised on LLVM, so use call cc-option to check
for support.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/Makefile | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 7a0daad
llvm accepts -fno-delete-null-pointer-checks but complains about it.
Wrap it to avoid getting enormous numbers of warnings.
Also add -no-integrated-as to disable the llvm integrated assembler,
lots of stuff currently relies on gas.
---
Makefile | 5 +
1 file changed, 5 insertions(+)
diff
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/include/asm/uaccess.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/uaccess.h
b/arch/powerpc/include/asm/uaccess.h
index a0c071d..2a8ebae 100644
--- a/arch/powerpc/include/asm
We need to use a trampoline when using LOAD_HANDLER(), because the
destination needs to be in the first 64kB. An absolute branch has
no such limitations, so just jump there.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/kernel/exceptions-64s.S | 2 +-
1 file changed, 1
restoring.
Remove the stale comment and the restore of the LR.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/kernel/exceptions-64s.S | 16
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/exceptions-64s.S
b/arch/powerpc/kernel
| | do_vfs_ioctl
| | sys_ioctl
| | system_call
| | __ioctl
| | 0x7e714
| | 0x7e714
Signed-off-by: Anton
Hi Cyril,
These two configs should be identical with the exception of big or
little endian
The big endian version has XMON_DEFAULT turned on while the little
endian has XMON_DEFAULT not set. Enable XMON_DEFAULT for little
endian.
I disabled it on the LE defconfig on purpose. In most cases
restore the DSCR on exit. I'm not sure we need to go to the
trouble of saving and restoring it, but we should at least get it back
to 0 when done.
Also a tiny nit, no need for a newline in perror():
open() failed
: Permission denied
With those changes you can add:
Signed-off-by: Anton Blanchard
...@vger.kernel.org
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/kernel/vmlinux.lds.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/kernel/vmlinux.lds.S
b/arch/powerpc/kernel/vmlinux.lds.S
index f096e72..1db6851 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
arch/powerpc/configs/pseries_le_defconfig | 1 +
3 files changed, 3 insertions(+)
diff --git a/arch/powerpc/configs/ppc64_defconfig
b/arch/powerpc
We cap 32bit userspace backtraces to PERF_MAX_STACK_DEPTH
(currently 127), but we forgot to do the same for 64bit backtraces.
Cc: sta...@vger.kernel.org
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/perf/callchain.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
arch/powerpc/configs/pseries_le_defconfig | 1 +
3 files changed, 3 insertions(+)
diff --git a/arch/powerpc/configs/ppc64_defconfig
b/arch/powerpc
(powerpc/jump_label: Use HAVE_JUMP_LABEL)
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/platforms/powernv/opal-wrappers.S | 2 +-
arch/powerpc/platforms/pseries/hvCall.S | 2 +-
arch/powerpc/platforms/pseries/lpar.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions
might
take multiple PMU exceptions per second per hardware thread even
if our hard lockup timeout is 10 seconds.
It can be enabled via a boot option, or via procfs.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/nmi.h | 4
To use jump labels in assembly we need the HAVE_JUMP_LABEL define,
so we select a fallback version if the toolchain does not support
them.
Modify linux/jump_label.h so it can be included by assembly files.
We also need to add -DCC_HAVE_ASM_GOTO to KBUILD_AFLAGS.
Signed-off-by: Anton Blanchard
for OPROFILE_NMI_TIMER to disable it on PPC64.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 05d7a8a..0cc605d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -32,7 +32,7 @@ config
ARCH_STATIC_BRANCH in the powerpc asm/jump_label.h
for an example).
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/arm/include/asm/jump_label.h | 5 ++---
arch/arm64/include/asm/jump_label.h | 8
arch/mips/include/asm/jump_label.h | 7 +++
arch/s390/include/asm/jump_label.h
A simple kernel module was used to create concurrent WARNs and BUGs:
http://ozlabs.org/~anton/junkcode/warnstorm.tar.gz
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/kernel/traps.c | 44
1 file changed, 36 insertions(+), 8 deletions
Remove another version of a recursive lock in dump_stack.
Signed-off-by: Anton Blanchard an...@samba.org
---
lib/dump_stack.c | 40
1 file changed, 4 insertions(+), 36 deletions(-)
diff --git a/lib/dump_stack.c b/lib/dump_stack.c
index 6745c62..f64ee3c
the series:
A trivial module to create concurrent WARNs, BUGs and oopses:
http://ozlabs.org/~anton/junkcode/warnstorm.tar.gz
And one to create concurrent soft and hard lockups:
http://ozlabs.org/~anton/junkcode/badguy.tar.gz
Anton Blanchard (7):
Add die_spin_lock_{irqsave,irqrestore
Many architectures have their own oops locking code that allows
the lock to be taken recursively. Create a common version.
Avoid creating generic locking functions, so they can't be
abused in other parts of the kernel.
Signed-off-by: Anton Blanchard an...@samba.org
---
include/linux/die_lock.h
Replace the powerpc specific oops locking with the common one.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/kernel/traps.c | 24 +++-
1 file changed, 3 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
Replace the ARM specific oops locking with the common one.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/arm/kernel/traps.c | 26 +++---
1 file changed, 3 insertions(+), 23 deletions(-)
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index 788e23f
Replace the x86 specific oops locking with the common one.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/x86/kernel/dumpstack.c | 26 +++---
1 file changed, 3 insertions(+), 23 deletions(-)
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
A simple kernel module was used to create concurrent soft and
hard lockups:
http://ozlabs.org/~anton/junkcode/badguy.tar.gz
Signed-off-by: Anton Blanchard an...@samba.org
---
kernel/watchdog.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index
is the
VMX register definitions - the kernel uses vrX whereas both gcc and
glibc use vX.
Change the kernel to match userspace.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/include/asm/ppc_asm.h | 64 +++---
arch/powerpc/include/uapi/asm/ptrace.h | 2
is the
VSX register definitions - the kernel uses vsrX whereas gcc uses
vsX.
Change the kernel to match userspace.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/include/asm/ppc_asm.h | 128 ++---
arch/powerpc/lib/ldstfp.S | 6 +-
2 files changed
Hi Aneesh,
yes. We do use jump label. I also verified that looking at .s
#APP
# 23 "./arch/powerpc/include/asm/jump_label.h" 1
1:
nop
.pushsection __jump_table, "aw"
.llong 1b, .L201, __tracepoint_hash_fault+8 #,
.popsection
# 0 "" 2
So we
Hi,
ebizzy with -S 30 -t 1 -P gave
13627 records/s - Without patch
13546 records/s - With patch with tracepoint disabled
OK. So that's about -0.6%. Are we happy with that? I'm not sure.
Can you do a few more runs and see if that's a stable result.
Surprisingly large. Is
Hi Robert,
I also don't see a reason, why you don't want to support oprofile NMI
timer. Is there any?
I couldn't come up with a case where it would be a benefit to us. We
roll out PMU support for a new CPU early so that the kernel and tools
support it when we GA. On the other hand adding
HAVE_PERF_EVENTS_NMI is used for two things - the oprofile NMI timer
and the hard lockup detector.
Create HAVE_OPROFILE_NMI_TIMER so an architecture can select them
separately. On ppc64 we want to add the hard lockup detector, but not
the oprofile NMI timer fallback.
Signed-off-by: Anton
Hi Arnd,
Would it help to also add a way for an architecture to override
memcmp_pages() with its own implementation? That way you could
skip the unaligned part, hardcode the loop counter and avoid the
preempt_disable() in kmap_atomic().
Good idea. We could also have a generic implementation
HAVE_PERF_EVENTS_NMI is used for two things - the oprofile NMI timer
and the hardlockup detector.
Create HAVE_OPROFILE_NMI_TIMER so an architecture can select them
separately. On ppc64 we want to add the hardlockup detector, but not
the oprofile NMI timer fallback.
Signed-off-by: Anton Blanchard
.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/lib/Makefile | 3 +-
arch/powerpc/lib/memcmp_64.S | 233 +++
arch/powerpc/lib/string.S | 2 +
3 files changed, 237 insertions(+), 1 deletion(-)
create mode 100644 arch/powerpc/lib
Add a testcase for the new ppc64 memcmp.
Signed-off-by: Anton Blanchard an...@samba.org
---
.../testing/selftests/powerpc/stringloops/Makefile | 21 +
.../selftests/powerpc/stringloops/asm/ppc_asm.h| 7 ++
.../selftests/powerpc/stringloops/memcmp_64.S | 1 +
.../selftests
This was enabled on the pseries defconfigs recently, but missed
the ppc64 one.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/ppc64_defconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/configs/ppc64_defconfig
b/arch/powerpc/configs/ppc64_defconfig
Regenerate defconfigs using make savedefconfig.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/ppc64_defconfig | 11 +--
arch/powerpc/configs/pseries_defconfig| 12 +++-
arch/powerpc/configs/pseries_le_defconfig | 14 +++---
3 files changed, 7
Enable config options required by lxc and docker.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/pseries_defconfig | 12
arch/powerpc/configs/pseries_le_defconfig | 12
2 files changed, 24 insertions(+)
diff --git a/arch/powerpc/configs
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
arch/powerpc/configs/pseries_le_defconfig | 1 +
3 files changed, 3 insertions(+)
diff --git a/arch/powerpc/configs/ppc64_defconfig
b/arch/powerpc
docker requires CONFIG_NETFILTER_XT_MARK to be enabled.
Unfortunately that means turning on CONFIG_NETFILTER_ADVANCED.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/pseries_defconfig | 22 +-
arch/powerpc/configs/pseries_le_defconfig | 22
KSM will only be used on areas marked for merging via madvise, and it
is showing nice improvements on KVM workloads, so enable it by
default.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
arch
We are starting to see ppc64 boxes with SATA AHCI adapters in it,
so enable it in our defconfigs.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
arch/powerpc/configs/pseries_le_defconfig | 1
Hi David,
The unrolled loop (deleted) looks excessive.
On a modern cpu with multiple execution units you can usually
manage to get the loop overhead to execute in parallel to the
actual 'work'.
So I suspect that a much simpler 'word at a time' loop will be
almost as fast - especially in the
Just over 17x faster.
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/lib/Makefile| 3 +-
arch/powerpc/lib/memcmp_64.S | 233 +++
arch/powerpc/lib/string.S| 2 +
3 files changed, 237 insertions(+), 1 deletion(-)
create mode
Hi Steve,
Have you tested this on other archs? Because just looking at x86, it
doesn't seem that asm/jump_label.h can handle being called in
assembly.
Since no one is including linux/jump_label.h in assembly yet, nothing
should break. We could however add __ASSEMBLY__ protection to all the
ARCH_STATIC_BRANCH in the powerpc asm/jump_label.h
for an example).
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/arm/include/asm/jump_label.h | 5 ++---
arch/arm64/include/asm/jump_label.h | 8
arch/mips/include/asm/jump_label.h | 7 +++
arch/s390/include/asm/jump_label.h
To use jump labels in assembly we need the HAVE_JUMP_LABEL define,
so we select a fallback version if the toolchain does not support
them.
Modify linux/jump_label.h so it can be included by assembly files.
We also need to add -DCC_HAVE_ASM_GOTO to KBUILD_AFLAGS.
Signed-off-by: Anton Blanchard
(powerpc/jump_label: Use HAVE_JUMP_LABEL)
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/platforms/powernv/opal-wrappers.S | 2 +-
arch/powerpc/platforms/pseries/hvCall.S | 2 +-
arch/powerpc/platforms/pseries/lpar.c | 1 +
3 files changed, 3 insertions(+), 2 deletions
(powernv: Add OPAL tracepoints)
Cc: sta...@vger.kernel.org # v3.17+
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/platforms/powernv/opal-wrappers.S | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/opal-wrappers.S
b/arch/powerpc/platforms/powernv/opal
To use jump labels in assembly we need the HAVE_JUMP_LABEL define,
so we select a fallback version if the toolchain does not support
them.
Modify linux/jump_label.h so it can be included by assembly files.
We also need to add -DCC_HAVE_ASM_GOTO to KBUILD_AFLAGS.
Signed-off-by: Anton Blanchard
(powerpc/jump_label: Use HAVE_JUMP_LABEL)
Signed-off-by: Anton Blanchard an...@samba.org
---
arch/powerpc/platforms/pseries/hvCall.S | 2 +-
arch/powerpc/platforms/pseries/lpar.c | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/pseries/hvCall.S
b/arch/powerpc
Hi Alan,
Right. This is really an rs6000 backend bug. We describe one of the
indirect calls that go wrong here as
(call_insn 108 107 109 13 (parallel [
(set (reg:DI 3 3)
(call (mem:SI (reg:DI 288) [0 *_67 S4 A8])
(const_int 64 [0x40])))
Hi Anshuman,
Yeah I wanted to convert all these tests which are related to DSCR
into individual self tests for powerpc. All these test cases have
Anton Blanchard and IBM's copyright on it but they are licensed with
GPL V2. Not sure whether Anton needs to okay this before I can modify
them
On Thu, 18 Dec 2014 16:11:54 +1100
Michael Ellerman m...@ellerman.id.au wrote:
On Wed, 2014-12-17 at 02:16 +0100, Alexander Graf wrote:
On 31.10.14 04:47, Anton Blanchard wrote:
LLVM doesn't support local named register variables and is
unlikely to. current_thread_info is using one, fix
Hi Alex,
Git bisect managed to point me to this commit as the offender for
OOPSes on e5500 and e6500 (and maybe the G4 as well, not sure).
Doing a git revert of this commit on top of linus/master makes things
work fine for me again.
Ouch, sorry for that, I'll work to reproduce. What gcc
Hi Ingo,
So we cannot call set_task_cpu() because in the normal life time
of a task the ->cpu value gets set on wakeup. So if a task is
blocked right now, and its affinity changes, it ought to get a
correct ->cpu selected on wakeup. The affinity mask and the
current value of ->cpu getting