On 19/09/2018 at 01:07, Joel Stanley wrote:
This partially reverts faa16bc404d72a5 ("lib: Use existing define with
polynomial").
The cleanup added a dependency on include/linux, which broke the PowerPC
boot wrapper/decompressor when KERNEL_XZ is enabled:
BOOTCC
On 9/18/18 10:27 PM, Christophe Leroy wrote:
In order to allow the 8xx to handle pte_fragments, this patch
extends the use of pte_fragments to nohash/32 platforms.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-40x.h | 1 +
arch/powerpc/include/asm/mmu-44x.h
On 9/18/18 10:27 PM, Christophe Leroy wrote:
There is no point in taking the page table lock as
pte_frag is always NULL when we have only one fragment.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable-frag.c | 3 +++
1 file changed, 3 insertions(+)
diff --git
From: YueHaibing
Date: Tue, 18 Sep 2018 14:35:47 +0800
> The method ndo_start_xmit() is defined as returning a 'netdev_tx_t',
> which is a typedef for an enum type, so make sure the implementation in
> this driver returns a 'netdev_tx_t' value, and change the function
> return type to
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1]
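The stack-size point above can be sketched in plain userspace C. The struct and macro names below are illustrative stand-ins, not the kernel's real crypto API: the old-style macro sizes its buffer from a value known only at runtime (a stack VLA), while the sync-style variant uses a compile-time constant.

```c
#include <stddef.h>

/* Illustrative stand-in for a crypto transform descriptor. */
struct tfm { size_t reqsize; };

/* Old pattern: buffer sized from a runtime per-transform value, i.e. a VLA. */
#define OLD_REQUEST_ON_STACK(name, t) \
    char name[(t)->reqsize]

/* Sync pattern: reqsize is bounded by a compile-time maximum, so the
 * on-stack buffer has a constant size and the VLA disappears. */
#define SYNC_MAX_REQSIZE 384
#define SYNC_REQUEST_ON_STACK(name) \
    char name[SYNC_MAX_REQSIZE]

size_t old_request_size(struct tfm *t)
{
    OLD_REQUEST_ON_STACK(req, t);   /* VLA: size unknown at compile time */
    return sizeof(req);             /* sizeof is evaluated at runtime here */
}

size_t sync_request_size(void)
{
    SYNC_REQUEST_ON_STACK(req);     /* fixed-size array: no VLA */
    return sizeof(req);             /* a compile-time constant */
}
```

The kernel's real SYNC_SKCIPHER_REQUEST_ON_STACK achieves the same effect by bounding the request size for synchronous-only transforms.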
On Tue, 2018-09-18 at 16:58 -0500, Bjorn Helgaas wrote:
> On Wed, Sep 12, 2018 at 11:55:26AM -0500, Bjorn Helgaas wrote:
> > From: Bjorn Helgaas
> >
> > The original PCI error recovery functionality was for the powerpc-specific
> > IBM EEH feature. PCIe subsequently added some similar features,
On Tue, Sep 18, 2018 at 01:48:16PM +0200, David Hildenbrand wrote:
> Reading through the code and studying how mem_hotplug_lock is to be used,
> I noticed that there are two places where we can end up calling
> device_online()/device_offline() - online_pages()/offline_pages() without
> the
Hi Christophe,
On Thu, 2018-09-13 at 10:21 +0200, Christophe LEROY wrote:
>
> On 11/09/2018 at 00:17, Radu Rendec wrote:
> >
> > The MPC83xx also has a watchdog and the kernel driver (mpc8xxx_wdt.c)
> > could also be improved to support the WDIOC_GETBOOTSTATUS ioctl and
> > properly report if
This partially reverts faa16bc404d72a5 ("lib: Use existing define with
polynomial").
The cleanup added a dependency on include/linux, which broke the PowerPC
boot wrapper/decompressor when KERNEL_XZ is enabled:
BOOTCC arch/powerpc/boot/decompress.o
In file included from
On Tue, 2018-09-11 at 10:12 +0800, andy.t...@nxp.com wrote:
> From: Yuantian Tang
>
> The compatible string is not correct in the clock node.
> The clocks property refers to the wrong node too.
> This patch is to fix them.
>
> Signed-off-by: Tang Yuantian
> ---
>
On Wed, Sep 12, 2018 at 11:55:26AM -0500, Bjorn Helgaas wrote:
> From: Bjorn Helgaas
>
> The original PCI error recovery functionality was for the powerpc-specific
> IBM EEH feature. PCIe subsequently added some similar features, including
> AER and DPC, that can be used on any architecture.
>
On Tue, Sep 18, 2018 at 1:48 PM David Hildenbrand wrote:
>
> add_memory() currently does not take the device_hotplug_lock, however
> it is already called under the lock from
> arch/powerpc/platforms/pseries/hotplug-memory.c
> drivers/acpi/acpi_memhotplug.c
> to synchronize against CPU
On Tue, Sep 18, 2018 at 1:48 PM David Hildenbrand wrote:
>
> remove_memory() is exported right now but requires the
> device_hotplug_lock, which is not exported. So let's provide a variant
> that takes the lock and only export that one.
>
> The lock is already held in
>
On Tue, Sep 18, 2018 at 05:01:48PM +0300, Sergey Miroshnichenko wrote:
> On 9/18/18 1:59 AM, Bjorn Helgaas wrote:
> > On Mon, Sep 17, 2018 at 11:55:43PM +0300, Sergey Miroshnichenko wrote:
> >> On 9/17/18 8:28 AM, Sam Bobroff wrote:
> >>> On Fri, Sep 14, 2018 at 07:14:01PM +0300, Sergey
On 09/18/18 11:55, Rob Herring wrote:
> On Fri, Sep 14, 2018 at 2:32 PM Frank Rowand wrote:
>>
>> On 09/13/18 13:28, Rob Herring wrote:
>>> Major changes are I2C and SPI bus checks, YAML output format (for
>>> future validation), some new libfdt functions, and more libfdt
>>> validation of dtbs.
This is the continuation of my work to sort out signaling of exceptions
with siginfo. The old approach of passing siginfo around resulted in many
cases where fields of siginfo were left uninitialized and then passed to
userspace, and also resulted in callers getting confused and
initializing the wrong
On Fri, Sep 14, 2018 at 2:32 PM Frank Rowand wrote:
>
> On 09/13/18 13:28, Rob Herring wrote:
> > Major changes are I2C and SPI bus checks, YAML output format (for
> > future validation), some new libfdt functions, and more libfdt
> > validation of dtbs.
> >
> > The YAML addition adds an optional
Signed-off-by: "Eric W. Biederman"
---
arch/powerpc/kernel/process.c | 9 +---
arch/powerpc/mm/fault.c | 9 +---
arch/powerpc/platforms/cell/spu_base.c| 4 ++--
arch/powerpc/platforms/cell/spufs/fault.c | 26 +++
4 files changed,
Call force_sig_pkuerr directly instead of rolling it by hand
in _exception_pkey.
Signed-off-by: "Eric W. Biederman"
---
arch/powerpc/kernel/traps.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index
Now that _exception no longer calls _exception_pkey it is no longer
necessary to handle any signal with any si_code. All pkey exceptions
are SIGSEGV paired with SEGV_PKUERR. So just handle
that case and remove the now unnecessary parameters from _exception_pkey.
Signed-off-by: "Eric W.
On Tue, Sep 18, 2018 at 10:51:08AM -0700, Darren Hart wrote:
> On Fri, Sep 14, 2018 at 09:57:48PM +0100, Al Viro wrote:
> > On Fri, Sep 14, 2018 at 01:35:06PM -0700, Darren Hart wrote:
> >
> > > Acked-by: Darren Hart (VMware)
> > >
> > > As for a longer term solution, would it be possible to
The callers of _exception don't need the pkey exception logic because
they are not processing a pkey exception. So just call exception_common
directly and then call force_sig_fault to generate the appropriate siginfo
and deliver the appropriate signal.
Signed-off-by: "Eric W. Biederman"
---
It is brittle and wrong to populate si_pkey when there was no pkey
exception. The field does not exist for all si_codes and in some
cases another field exists in the same memory location.
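The aliasing hazard described here can be sketched with a toy union. The layout below is illustrative only, not the kernel's actual siginfo: a store through the pkey view of the union shows up in the field that shares its storage.

```c
#include <string.h>

/* Illustrative layout only -- not the kernel's real siginfo. The two
 * views share storage after the fault address, just as si_pkey and
 * si_addr_lsb overlap inside the real siginfo union. */
union fault_fields {
    struct { void *addr; int pkey; } segv;        /* SEGV_PKUERR view */
    struct { void *addr; short addr_lsb; } bus;   /* BUS_MCEERR view */
};

int stray_pkey_corrupts_addr_lsb(void)
{
    union fault_fields info;
    memset(&info, 0, sizeof info);
    info.segv.pkey = 0x00010001;   /* "populate si_pkey" for a non-pkey fault */
    return info.bus.addr_lsb != 0; /* the aliased BUS_MCEERR field changed */
}
```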
So factor out the code that all exceptions handlers must run
into exception_common, leaving the
Now that bad_key_fault_exception no longer calls __bad_area_nosemaphore
there is no reason for __bad_area_nosemaphore to handle pkeys.
Signed-off-by: "Eric W. Biederman"
---
arch/powerpc/mm/fault.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git
This removes the need for other code paths to deal with pkey exceptions.
Signed-off-by: "Eric W. Biederman"
---
arch/powerpc/mm/fault.c | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index
There are no callers of __bad_area that pass in a pkey parameter so it makes
no sense to take one.
Signed-off-by: "Eric W. Biederman"
---
arch/powerpc/mm/fault.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index
In do_sigbus isolate the mceerr signaling code and call
force_sig_mceerr instead of falling through to the force_sig_info that
works for all of the other signals.
Signed-off-by: "Eric W. Biederman"
---
arch/powerpc/mm/fault.c | 18 +++---
1 file changed, 11 insertions(+), 7
On Fri, Sep 14, 2018 at 09:57:48PM +0100, Al Viro wrote:
> On Fri, Sep 14, 2018 at 01:35:06PM -0700, Darren Hart wrote:
>
> > Acked-by: Darren Hart (VMware)
> >
> > As for a longer term solution, would it be possible to init fops in such
> > a way that the compat_ioctl call defaults to
Christophe Leroy writes:
In order to avoid multiple conversions, hand a pgprot_t directly
to map_kernel_page(), as is already done for radix.
Do the same for __ioremap_caller() and __ioremap_at().
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Christophe Leroy
---
On the 8xx, the GUARDED attribute of the pages is managed in the
L1 entry, therefore to avoid having to copy it into L1 entry
at each TLB miss, we set it in the PMD.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/pte-8xx.h | 3 ++-
arch/powerpc/kernel/head_8xx.S
On the 8xx, the GUARDED attribute of the pages is managed in the
L1 entry, therefore to avoid having to copy it into L1 entry
at each TLB miss, we have to set it in the PMD.
In order to allow this, this patch splits the VM alloc space in two
parts, one for VM alloc and non Guarded IO, and one for
Using this HW assistance implies some constraints on the
page table structure:
- Regardless of the main page size used (4k or 16k), the
level 1 table (PGD) contains 1024 entries and each PGD entry covers
a 4Mbytes area which is managed by a level 2 table (PTE) containing
also 1024 entries each
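The figures quoted above are self-consistent, which a few lines of C can check: 1024 PGD entries of 4 Mbytes each cover the full 32-bit address space, and each 4 Mbyte region holds 1024 4k PTEs.

```c
/* Figures from the description above: 1024 PGD entries, each covering
 * a 4 Mbyte region, with 4k base pages. */
enum {
    PGD_ENTRIES = 1024,
    PGD_SPAN    = 4 << 20,   /* 4 Mbytes per PGD entry */
    PAGE_4K     = 4 << 10,   /* 4k base page */
};

/* total address space covered by the level 1 table */
unsigned long long address_space_covered(void)
{
    return (unsigned long long)PGD_ENTRIES * PGD_SPAN;
}

/* entries needed in each level 2 table */
int ptes_per_l2_table(void)
{
    return PGD_SPAN / PAGE_4K;
}
```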
commit 1bc54c03117b9 ("powerpc: rework 4xx PTE access and TLB miss")
introduced non-atomic PTE updates and started the work of removing
PTE updates in TLB miss handlers, but kept PTE_ATOMIC_UPDATES for the
8xx with the following comment:
/* Until my rework is finished, 8xx still needs atomic PTE
In order to allow the 8xx to handle pte_fragments, this patch
extends the use of pte_fragments to nohash/32 platforms.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-40x.h | 1 +
arch/powerpc/include/asm/mmu-44x.h | 1 +
arch/powerpc/include/asm/mmu-8xx.h
There is no point in taking the page table lock as
pte_frag is always NULL when we have only one fragment.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable-frag.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
In preparation for the next patch, which generalises the use of
pte_fragment_alloc() for all platforms, this patch moves the related
functions to a place that is common to all subarches.
The 8xx will need that for supporting 16k pages, as in that mode
page tables still have a size of 4k.
Since pte_fragment with
BOOK3S/32 cannot be BOOKE, so remove useless code
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 18 --
arch/powerpc/include/asm/book3s/32/pgtable.h | 14 --
2 files changed, 32 deletions(-)
diff --git
As in PPC64, inline pte_alloc_one() and pte_alloc_one_kernel()
in PPC32. This will allow switching nohash/32 to pte_fragment
without impacting hash/32.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 22 --
In the same way as PPC64, let's handle pte allocation directly
in map_kernel_page() when slab is not available.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable_32.c | 34 +-
1 file changed, 21 insertions(+), 13 deletions(-)
diff --git
As this is running with MMU off, the CPU only does speculative
fetch for code in the same page.
Following the significant size reduction of TLB handler routines,
the side handlers can be brought back close to the main part,
i.e. in the same page.
Signed-off-by: Christophe Leroy
---
This patch reworks the TLB Miss handler in order to not use r12
register, hence avoiding having to save it into SPRN_SPRG_SCRATCH2.
In the DAR Fixup code we can now use SPRN_M_TW, freeing
SPRN_SPRG_SCRATCH2.
Then SPRN_SPRG_SCRATCH2 may be used for something else in the future.
Signed-off-by:
For using 512k pages with hardware assistance, the PTEs have to be spread
every 128 bytes in the L2 table.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/hugetlb.h | 4 +++-
arch/powerpc/mm/hugetlbpage.c | 13 +
arch/powerpc/mm/tlb_nohash.c | 3 +++
3
Today, on the 8xx the TLB handlers do SW tablewalk by doing all
the calculation in ASM, in order to match with the Linux page
table structure.
The 8xx offers hardware assistance which allows significant size
reduction of the TLB handlers, hence also reduces the time spent
in the handlers.
In preparation of making use of hardware assistance in TLB handlers,
this patch temporarily disables 16K pages and 512K pages. The reason
is that when using HW assistance in 4K pages mode, the Linux model
fits the HW model for 4K pages and 8M pages.
However for 16K pages and 512K mode some
In order to simplify time-critical exception handling of the 8xx-specific
SW perf counters, this patch moves the counters into
the beginning of memory. This is possible because .text is readable
and the counters are never modified outside of the handlers.
By doing this, we avoid having to set a second
The 8xx TLB miss routines are patched when (de)activating
perf counters.
This patch uses the new patch_site functionality in order
to get better code readability and avoid a label mess when
dumping the code with 'objdump -d'
Signed-off-by: Christophe Leroy
---
The 8xx TLB miss routines are patched at startup at several places.
This patch uses the new patch_site functionality in order
to get better code readability and avoid a label mess when
dumping the code with 'objdump -d'
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-8xx.h |
This reverts commit 4f94b2c7462d9720b2afa7e8e8d4c19446bb31ce.
That commit was buggy, as it used rlwinm instead of rlwimi.
Instead of fixing that bug, we revert the previous commit in order to
reduce the dependency between L1 entries and L2 entries.
Fixes: 4f94b2c7462d9 ("powerpc/8xx: Use L1 entry
This patch adds a helper to get the address of a patch_site
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/code-patching.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/include/asm/code-patching.h
b/arch/powerpc/include/asm/code-patching.h
index
The purpose of this series is to implement hardware assistance for TLB table walk
on the 8xx.
First part switches to patch_site instead of patch_instruction,
as it makes the code clearer and avoids pollution with global symbols.
Optimise access to perf counters (hence reduce number of registers
On 09/14/2018 07:36 PM, Hari Bathini wrote:
> Firmware-Assisted Dump (FADump) needs to be registered again after any
> memory hot add/remove operation to update the crash memory ranges. But
> currently, the kernel returns '-EEXIST' if we try to register without
> unregistering it first. This could
On Mon, Sep 17, 2018 at 06:22:50AM +, Christophe Leroy wrote:
> mpc8xxx watchdog driver supports the following platforms:
> - mpc8xx
> - mpc83xx
> - mpc86xx
>
> Those three platforms have a 32-bit register which provides the
> reason of the last boot, including whether it was caused by the
>
On 9/18/18 1:59 AM, Bjorn Helgaas wrote:
> [+cc Russell, Ben, Oliver, linuxppc-dev]
>
> On Mon, Sep 17, 2018 at 11:55:43PM +0300, Sergey Miroshnichenko wrote:
>> Hello Sam,
>>
>> On 9/17/18 8:28 AM, Sam Bobroff wrote:
>>> Hi Sergey,
>>>
>>> On Fri, Sep 14, 2018 at 07:14:01PM +0300, Sergey
On 9/18/18 9:21 AM, David Gibson wrote:
On Mon, Sep 03, 2018 at 10:07:33PM +0530, Aneesh Kumar K.V wrote:
Current code doesn't do page migration if the page allocated is a compound page.
With HugeTLB migration support, we can end up allocating hugetlb pages from
CMA region. Also THP pages can
On 14 September 2018 at 15:31, Arnd Bergmann wrote:
> On Fri, Sep 14, 2018 at 10:33 AM Firoz Khan wrote:
>
>> ---
>> arch/powerpc/kernel/syscalls/Makefile | 51
>> arch/powerpc/kernel/syscalls/syscall_32.tbl | 378
>>
>>
THP pages can get split during different code paths. An incremented reference
count does imply we will not split the compound page. But the pmd entry can be
converted to level 4 pte entries. Keep the code simpler by allowing large
IOMMU page size only if the guest ram is backed by hugetlb pages.
Current code doesn't do page migration if the page allocated is a compound page.
With HugeTLB migration support, we can end up allocating hugetlb pages from
CMA region. Also THP pages can be allocated from CMA region. This patch updates
the code to handle compound pages correctly.
This uses the
This helper does a get_user_pages_fast and, if it finds pages in the CMA area,
it will try to migrate them before taking a page reference. This makes sure that
we don't keep non-movable pages (due to the page reference count) in the CMA area.
Being unable to move pages out of the CMA area results in CMA allocation
ppc64 uses the CMA area for the allocation of the guest page table (hash page
table). We won't be able to start a guest if we fail to allocate the hash page
table. We have observed hash table allocation failures because we failed to
migrate pages out of the CMA region because they were pinned. This happens when we
On 18/09/2018 at 13:47, Aneesh Kumar K.V wrote:
Christophe LEROY writes:
On 17/09/2018 at 11:03, Aneesh Kumar K.V wrote:
Christophe Leroy writes:
Hi,
I'm having a hard time figuring out the best way to handle the following
situation:
On the powerpc8xx, handling 16k size pages
Let's document the magic a bit, especially why device_hotplug_lock is
required when adding/removing memory and how it all plays together with
requests to online/offline memory from user space.
Cc: Jonathan Corbet
Cc: Michal Hocko
Cc: Andrew Morton
Reviewed-by: Pavel Tatashin
Signed-off-by:
Let's perform all checking + offlining + removing under
device_hotplug_lock, so nobody can mess with these devices via
sysfs concurrently.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Rashmica Gupta
Cc: Balbir Singh
Cc: Michael Neuling
Reviewed-by: Pavel Tatashin
device_online() should be called with device_hotplug_lock() held.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Rashmica Gupta
Cc: Balbir Singh
Cc: Michael Neuling
Reviewed-by: Pavel Tatashin
Signed-off-by: David Hildenbrand
---
There seem to be some problems as a result of 30467e0b3be ("mm, hotplug:
fix concurrent memory hot-add deadlock"), which tried to fix a possible
lock inversion reported and discussed in [1] due to the two locks
a) device_lock()
b) mem_hotplug_lock
While add_memory() first takes b),
add_memory() currently does not take the device_hotplug_lock, however it
is already called under the lock from
arch/powerpc/platforms/pseries/hotplug-memory.c
drivers/acpi/acpi_memhotplug.c
to synchronize against CPU hot-remove and similar.
In general, we should hold the
remove_memory() is exported right now but requires the
device_hotplug_lock, which is not exported. So let's provide a variant
that takes the lock and only export that one.
The lock is already held in
arch/powerpc/platforms/pseries/hotplug-memory.c
drivers/acpi/acpi_memhotplug.c
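The export pattern being described, where the lock-assuming worker stays private and only a self-locking wrapper is exposed, can be sketched in dependency-free userspace C. Names mirror the discussion but are illustrative, not the kernel's exact symbols.

```c
#include <assert.h>

static int device_hotplug_locked;   /* stand-in for device_hotplug_lock */
static int regions_removed;

/* not exported: the caller must already hold device_hotplug_lock */
static void __remove_memory(void)
{
    assert(device_hotplug_locked);  /* documents the locking contract */
    regions_removed++;
}

/* the only exported entry point: it takes the lock itself, so external
 * callers serialize correctly without ever seeing the lock */
void remove_memory(void)
{
    device_hotplug_locked = 1;      /* take device_hotplug_lock */
    __remove_memory();
    device_hotplug_locked = 0;      /* release it */
}

int removed_count(void) { return regions_removed; }
```

Keeping the lock private forces every outside caller through the wrapper, which is exactly the serialization guarantee the patch is after.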
Reading through the code and studying how mem_hotplug_lock is to be used,
I noticed that there are two places where we can end up calling
device_online()/device_offline() - online_pages()/offline_pages() without
the mem_hotplug_lock. And there are other places where we call
Christophe LEROY writes:
> On 17/09/2018 at 11:03, Aneesh Kumar K.V wrote:
>> Christophe Leroy writes:
>>
>>> Hi,
>>>
>>> I'm having a hard time figuring out the best way to handle the following
>>> situation:
>>>
>>> On the powerpc8xx, handling 16k size pages requires to have page tables
On Tue, 18 Sep 2018 10:52:09 +0200
Christophe LEROY wrote:
>
>
> > On 14/09/2018 at 06:22, Nicholas Piggin wrote:
> > On Fri, 14 Sep 2018 11:14:11 +1000
> > Michael Neuling wrote:
> >
> >> This stops us from doing code patching in init sections after
> >> they've been freed.
> >>
> >> In
Joel Stanley writes:
> On Tue, 18 Sep 2018 at 06:11, Nick Desaulniers
> wrote:
>>
>> On Fri, Sep 14, 2018 at 2:08 PM Segher Boessenkool
>> wrote:
>> >
>> > On Fri, Sep 14, 2018 at 10:47:08AM -0700, Nick Desaulniers wrote:
>> > > On Thu, Sep 13, 2018 at 9:07 PM Joel Stanley wrote:
>> > > >
Christophe LEROY writes:
> On 18/09/2018 at 07:48, Joel Stanley wrote:
>> Hey Christophe,
>>
>> On Tue, 18 Sep 2018 at 15:13, Christophe Leroy
>> wrote:
>>>
>>> Since commit cafa0010cd51 ("Raise the minimum required gcc version
>>> to 4.6"), it is not possible to build kernel with GCC lower
Hi Nathan,
On Tue, Sep 18, 2018 at 1:05 AM Nathan Fontenot
wrote:
>
> When performing partition migrations all present CPUs must be online
> as all present CPUs must make the H_JOIN call as part of the migration
> process. Once all present CPUs make the H_JOIN call, one CPU is returned
> to make
On the below patch, checkpatch reports
WARNING: struct kgdb_arch should normally be const
#127: FILE: arch/powerpc/kernel/kgdb.c:480:
+struct kgdb_arch arch_kgdb_ops;
But when I add 'const', I get compilation failure
CC arch/powerpc/kernel/kgdb.o
arch/powerpc/kernel/kgdb.c:480:24:
Generic implementation fails to remove breakpoints after init
when CONFIG_STRICT_KERNEL_RWX is selected:
[ 13.251285] KGDB: BP remove failed: c001c338
[ 13.259587] kgdbts: ERROR PUT: end of test buffer on 'do_fork_test' line 8
expected OK got $E14#aa
[ 13.268969] KGDB: re-enter exception:
On 14/09/2018 at 06:22, Nicholas Piggin wrote:
On Fri, 14 Sep 2018 11:14:11 +1000
Michael Neuling wrote:
This stops us from doing code patching in init sections after they've
been freed.
In this chain:
kvm_guest_init() ->
kvm_use_magic_page() ->
fault_in_pages_readable()
On Tue, 2018-09-18 at 10:08 +0200, Mathieu Malaterre wrote:
>
>
> On Wed, Aug 29, 2018 at 10:03 AM Joakim Tjernlund
> wrote:
> >
> > to_tm() hardcodes wday to -1 as "No-one uses the day of the week".
> > But recently rtc driver ds1307 does care and tries to correct wday.
> >
> > Add wday
Hi Laurent,
I am sorry for replying to you so late.
The previous LKP tests for this case were run on the same Intel Skylake 4S
platform, but that machine has recently needed maintenance.
So I changed to another test box to run the page_fault3 test case; it is an Intel
Skylake 2S platform (nr_cpu: 104, memory: 64G).
On Wed, 2018-09-12 at 16:40 -0300, Breno Leitao wrote:
> The Documentation/powerpc/transactional_memory.txt says:
>
> "Syscalls made from within a suspended transaction are performed as normal
> and the transaction is not explicitly doomed by the kernel. However,
> what the kernel does to
The method ndo_start_xmit() is defined as returning a 'netdev_tx_t',
which is a typedef for an enum type, so make sure the implementation in
this driver returns a 'netdev_tx_t' value, and change the function
return type to netdev_tx_t.
Found by coccinelle.
Signed-off-by: YueHaibing
---
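For reference, here is a minimal userspace sketch of the pattern. The enum values mirror those in the kernel's netdevice.h as I recall them, so treat them as illustrative: returning the typedef'd enum rather than a bare int lets tools such as coccinelle flag mismatched returns.

```c
/* Shape of the kernel's netdev_tx_t (values quoted from memory from
 * netdevice.h; illustrative): a typedef'd enum rather than a bare int. */
typedef enum netdev_tx {
    NETDEV_TX_OK   = 0x00,
    NETDEV_TX_BUSY = 0x10,
} netdev_tx_t;

/* A conforming ndo_start_xmit-style implementation returns the enum
 * type, so the declared ops signature and the implementation agree. */
static netdev_tx_t demo_start_xmit(int queue_full)
{
    if (queue_full)
        return NETDEV_TX_BUSY;
    return NETDEV_TX_OK;
}

int demo(void)
{
    return demo_start_xmit(0) == NETDEV_TX_OK
        && demo_start_xmit(1) == NETDEV_TX_BUSY;
}
```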