Currently, if the printk lock (logbuf_lock) is held by another thread during a
crash, there is a chance of deadlocking the crash path on the next printk,
blocking a possibly desired kdump.
At the start of default_machine_crash_shutdown, make printk enter
NMI context, as it will use per-cpu buffers to store
On Wed, May 06, 2020 at 10:50:04PM -0700, Prakhar Srivastava wrote:
> Hi Mark,
Please don't top post.
> This patch set currently only address the Pure DT implementation.
> EFI and ACPI implementations will be posted in subsequent patchsets.
>
> The logs are intended to be carried over the kexec
On 2020-05-12 01:25, Greg KH wrote:
On Tue, May 12, 2020 at 09:22:15AM +0200, Jiri Slaby wrote:
On 11. 05. 20, 9:39, Greg KH wrote:
> On Mon, May 11, 2020 at 12:23:58AM -0700, rana...@codeaurora.org wrote:
>> On 2020-05-09 23:48, Greg KH wrote:
>>> On Sat, May 09, 2020 at 06:30:56PM -0700,
On Mon, May 04, 2020 at 01:38:28PM -0700, Prakhar Srivastava wrote:
> Introduce a device tree layer to read and store the IMA buffer
> from the reserved memory section of a device tree.
But why do I need 'a layer of abstraction'? I don't like them.
> Signed-off-by: Prakhar Srivastava
> ---
>
On Sun, 10 May 2020 00:54:58 PDT (-0700), Christoph Hellwig wrote:
RISC-V needs almost no cache flushing routines of its own. Rely on
asm-generic/cacheflush.h for the defaults.
Also remove the pointless __KERNEL__ ifdef while we're at it.
---
arch/riscv/include/asm/cacheflush.h | 62
With a 64K page size, a flush with start and end values as below
(start, end) = (721f680d, 721f680e) results in
(hstart, hend) = (721f6820, 721f6800)
Avoid doing a __tlbie_va_range() with the wrong hstart and hend values in this
case.
__tlbie_va_range will skip the actual tlbie
On 11.05.20 19:47, Srikar Dronamraju wrote:
> * David Hildenbrand [2020-05-08 15:42:12]:
>
> Hi David,
>
> Thanks for the steps to tryout.
>
>>>
>>> #! /bin/bash
>>> sudo x86_64-softmmu/qemu-system-x86_64 \
>>> --enable-kvm \
>>> -m 4G,maxmem=20G,slots=2 \
>>> -smp
On 11. 05. 20, 9:39, Greg KH wrote:
> On Mon, May 11, 2020 at 12:23:58AM -0700, rana...@codeaurora.org wrote:
>> On 2020-05-09 23:48, Greg KH wrote:
>>> On Sat, May 09, 2020 at 06:30:56PM -0700, rana...@codeaurora.org wrote:
On 2020-05-06 02:48, Greg KH wrote:
> On Mon, Apr 27, 2020 at
On Sat, May 09, 2020 at 08:07:14AM -0700, Dan Williams wrote:
> > which are all used in the I/O submission path (generic_make_request /
> > generic_make_request_checks). This is mostly a prep cleanup patch to
> > also remove the pointless queue argument from ->make_request - then
> > ->queue is
On Tue, May 12, 2020 at 09:22:15AM +0200, Jiri Slaby wrote:
> On 11. 05. 20, 9:39, Greg KH wrote:
> > On Mon, May 11, 2020 at 12:23:58AM -0700, rana...@codeaurora.org wrote:
> >> On 2020-05-09 23:48, Greg KH wrote:
> >>> On Sat, May 09, 2020 at 06:30:56PM -0700, rana...@codeaurora.org wrote:
>
Implement rtas_call_reentrant() for reentrant rtas-calls:
"ibm,int-on", "ibm,int-off", "ibm,get-xive" and "ibm,set-xive".
On LoPAPR Version 1.1 (March 24, 2016), from 7.3.10.1 to 7.3.10.4,
items 2 and 3 say:
2 - For the PowerPC External Interrupt option: The * call must be
reentrant to the number
Hello Nathan, thanks for the feedback!
On Fri, 2020-04-10 at 14:28 -0500, Nathan Lynch wrote:
> Leonardo Bras writes:
> > Implement rtas_call_reentrant() for reentrant rtas-calls:
> > "ibm,int-on", "ibm,int-off", "ibm,get-xive" and "ibm,set-xive".
> >
> > On LoPAPR Version 1.1 (March 24, 2016),
Catalin Marinas writes:
> On Mon, May 11, 2020 at 09:15:55PM +1000, Michael Ellerman wrote:
>> Qian Cai writes:
>> > kvmppc_pmd_alloc() and kvmppc_pte_alloc() allocate some memory but then
>> > pud_populate() and pmd_populate() will use __pa() to reference the newly
>> > allocated memory. The
Architectures like ppc64 provide persistent memory specific barriers
that will ensure that all stores for which the modifications are
written to persistent storage by preceding dcbfps and dcbstps
instructions have updated persistent storage before any data
access or data transfer caused by
POWER10 introduces two new variants of dcbf instructions (dcbstps and dcbfps)
that can be used to write modified locations back to persistent storage.
Additionally, POWER10 also introduces phwsync and plwsync which can be used
to establish order of these writes to persistent storage.
This patch
Excerpts from Aneesh Kumar K.V's message of May 13, 2020 1:06 pm:
> With a 64K page size flush with start and end value as below
> (start, end) = (721f680d, 721f680e) results in
> (hstart, hend) = (721f6820, 721f6800)
>
> Avoid doing a __tlbie_va_range with the wrong hstart and
On Tue, May 12, 2020 at 12:20:13PM -0700, Matthew Wilcox wrote:
> On Tue, May 12, 2020 at 09:44:13PM +0300, Mike Rapoport wrote:
> > diff --git a/arch/alpha/kernel/proto.h b/arch/alpha/kernel/proto.h
> > index a093cd45ec79..701a05090141 100644
> > --- a/arch/alpha/kernel/proto.h
> > +++
On Tue, May 12, 2020 at 12:24:41PM -0700, Matthew Wilcox wrote:
> On Tue, May 12, 2020 at 09:44:18PM +0300, Mike Rapoport wrote:
> > +++ b/include/linux/pgtable.h
> > @@ -28,6 +28,24 @@
> > #define USER_PGTABLES_CEILING 0UL
> > #endif
> >
> > +/* FIXME: */
>
> Fix you what? Add
Start using dcbstps; phwsync; sequence for flushing persistent memory range.
Even though the new instructions are implemented as variants of dcbf and
hwsync, and on
POWER9 they will be executed as those instructions, we still avoid using them on
older hardware. This helps to avoid difficult to
Qian Cai writes:
> kvmppc_pmd_alloc() and kvmppc_pte_alloc() allocate some memory but then
> pud_populate() and pmd_populate() will use __pa() to reference the newly
> allocated memory. The same is in xive_native_provision_pages().
Can you please split this into two patches, one for the KVM
Including rtas.h in other headers in order to get any rtas* struct may
cause a lot of errors, due to the include dependencies needed for
inline functions.
Create rtas-types.h and move there all type/struct definitions
from rtas.h, then include rtas-types.h into rtas.h.
Also, as suggested by
v2:
http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20200513044025.105379-2-leobra...@gmail.com/
(Series:
http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=176534)
On Tue, May 12, 2020 at 06:59:36PM +0530, Srikar Dronamraju wrote:
> Node id queried from the static device tree may not
> be correct. For example: it may always show 0 on a shared processor.
> Hence prefer the node id queried from vphn and fall back on the device tree
> based node id if the vphn query
Excerpts from Leonardo Bras's message of May 13, 2020 7:45 am:
> Currently, if the printk lock (logbuf_lock) is held by another thread during a
> crash, there is a chance of deadlocking the crash path on the next printk,
> blocking a possibly desired kdump.
>
> At the start of default_machine_crash_shutdown,
Excerpts from Michael Ellerman's message of May 11, 2020 10:58 pm:
> +void hpt_do_stress(unsigned long ea, unsigned long access,
> +unsigned long rflags, unsigned long hpte_group)
> +{
> + unsigned long last_group;
> + int cpu = raw_smp_processor_id();
> +
> +
nvdimm expects the flush routines to just mark the cache clean. The barrier
that makes the stores globally visible is done in nvdimm_flush().
Update the papr_scm driver to a simplified nvdimm_flush callback that does
only the required barrier.
Signed-off-by: Aneesh Kumar K.V
---
of_pmem on POWER10 can now use phwsync instead of hwsync to ensure
all previous writes are architecturally visible for the platform
buffer flush.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/cacheflush.h | 10 ++
1 file changed, 10 insertions(+)
diff --git
Hello Nick, thanks for your feedback.
Comments inline:
On Wed, 2020-05-13 at 14:36 +1000, Nicholas Piggin wrote:
> Excerpts from Leonardo Bras's message of May 13, 2020 7:45 am:
> > Currently, if the printk lock (logbuf_lock) is held by another thread during a
> > crash, there is a chance of deadlocking
On Tue, May 12, 2020 at 10:48:41AM +0800, Shengjiu Wang wrote:
> On Wed, May 6, 2020 at 10:33 AM Shengjiu Wang wrote:
> > On Fri, May 1, 2020 at 6:23 PM Mark Brown wrote:
> > > > EDMA requires the period size to be multiple of maxburst. Otherwise
> > > > the remaining bytes are not transferred
Changelog v3 -> v4:
- Resolved comments from Christopher.
Link v3:
http://lore.kernel.org/lkml/20200501031128.19584-1-sri...@linux.vnet.ibm.com/t/#u
Changelog v2 -> v3:
- Resolved comments from Gautham.
Link v2:
The node id queried from the static device tree may not
be correct. For example: it may always show 0 on a shared processor.
Hence prefer the node id queried from vphn and fall back on the device tree
based node id if the vphn query fails.
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux...@kvack.org
Cc:
PVR value of 0x0F06 means we are arch v3.1 compliant (i.e. POWER10).
Signed-off-by: Cédric Le Goater
Signed-off-by: Alistair Popple
---
arch/powerpc/include/asm/cputable.h | 16 ++--
arch/powerpc/include/asm/mmu.h | 1 +
arch/powerpc/include/asm/prom.h | 1 +
A Powerpc system with multiple possible nodes and with CONFIG_NUMA
enabled always used to have a node 0, even if node 0 does not have any cpus
or memory attached to it. As per PAPR, node affinity of a cpu is only
available once it is present / online. For all cpus that are possible but
not present,
Currently Linux kernel with CONFIG_NUMA on a system with multiple
possible nodes, marks node 0 as online at boot. However in practice,
there are systems which have node 0 as memoryless and cpuless.
This can cause numa_balancing to be enabled on systems with only one node
with memory and CPUs.
Paul Mackerras writes:
> On Wed, Apr 08, 2020 at 10:21:29PM +1000, Michael Ellerman wrote:
>>
>> We should be able to just allocate the rtas_args on the stack, it's only
>> ~80 odd bytes. And then we can use rtas_call_unlocked() which doesn't
>> take the global lock.
>
> Do we instantiate a
Two new future architectural features requiring HWCAP bits are being
developed. Once allocated in the kernel, firmware can enable these via
device tree cpu features.
Signed-off-by: Alistair Popple
---
arch/powerpc/include/uapi/asm/cputable.h | 2 ++
1 file changed, 2 insertions(+)
diff --git
This series brings together three previously posted patches required for
POWER10 support and introduces a new patch enabling POWER10 architected
mode.
Alistair Popple (4):
powerpc: Add new HWCAP bits
powerpc: Add base support for ISA v3.1
powerpc/dt_cpu_ftrs: Advertise support for ISA v3.1
The processing clock differs between platforms, so it is better
to set ASR76K and ASR56K based on the processing clock, rather than
hard-coding their values.
Signed-off-by: Shengjiu Wang
Signed-off-by: Mihai Serban
---
sound/soc/fsl/fsl_asrc.c | 15 ++-
1 file changed, 10
* David Hildenbrand [2020-05-12 09:49:05]:
> On 11.05.20 19:47, Srikar Dronamraju wrote:
> > * David Hildenbrand [2020-05-08 15:42:12]:
> >
> >
> > [root@localhost ~]# cat /sys/devices/system/node/online
> > 0
> > [root@localhost ~]# cat /sys/devices/system/node/possible
> > 0-1
> >
> > Even
Advertise support for the ISA v3.1 cpu feature if it is present in the
device-tree.
Signed-off-by: Alistair Popple
---
arch/powerpc/kernel/dt_cpu_ftrs.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c
b/arch/powerpc/kernel/dt_cpu_ftrs.c
index
Newer ISA versions are enabled by clearing all bits in the PCR
associated with previous versions of the ISA. Enable ISA v3.1 support
by updating the PCR mask to include ISA v3.0. This ensures all PCR
bits corresponding to earlier architecture versions get cleared,
thereby enabling ISA v3.1.
On Mon, May 11, 2020 at 07:43:30AM -0400, Qian Cai wrote:
> On May 11, 2020, at 7:15 AM, Michael Ellerman wrote:
> > There is kmemleak_alloc_phys(), which according to the docs can be used
> > for tracking a phys address.
> >
> > Did you try that?
>
> Catalin, feel free to give your thoughts
Hi Christoph,
On 10/5/20 5:54 pm, Christoph Hellwig wrote:
m68knommu needs almost no cache flushing routines of its own. Rely on
asm-generic/cacheflush.h for the defaults.
Signed-off-by: Christoph Hellwig
Acked-by: Greg Ungerer
Regards
Greg
---
arch/m68k/include/asm/cacheflush_no.h
Hi Christoph,
On 10/5/20 5:55 pm, Christoph Hellwig wrote:
load_flat_file works on user addresses.
Signed-off-by: Christoph Hellwig
Acked-by: Greg Ungerer
Regards
Greg
---
fs/binfmt_flat.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/binfmt_flat.c
On Tue, 12 May 2020, Srikar Dronamraju wrote:
> +#ifdef CONFIG_NUMA
> + [N_ONLINE] = NODE_MASK_NONE,
Again. Same issue as before. If you do this then you do a global change
for all architectures. You need to put something in the early boot
sequence (in a non architecture specific way) that
On Fri, 1 May 2020 16:12:06 +0800, Shengjiu Wang wrote:
> Add new compatible string "fsl,imx8qm-esai" in the binding document.
>
> Signed-off-by: Shengjiu Wang
> ---
> Documentation/devicetree/bindings/sound/fsl,esai.txt | 1 +
> 1 file changed, 1 insertion(+)
>
Acked-by: Rob Herring
From: Mike Rapoport
All architectures define pmd_index() as
(address >> PMD_SHIFT) & (PTRS_PER_PMD - 1)
and all architectures that have at least three-level page tables define
pmd_offset() as an entry in the array of PMDs indexed by the pmd_index().
For most architectures, the
From: Mike Rapoport
All architectures that have at least four-level page tables define
pud_offset() as an entry in the array of PUDs indexed by the pud_index(),
where pud_index() is
(address >> PUD_SHIFT) & (PTRS_PER_PUD - 1)
For most architectures, the pud_offset() implementation
From: Mike Rapoport
All architectures define pgd_offset() as an entry in the array of
PGDs indexed by the pgd_index(), where pgd_index() is
(address >> PGD_SHIFT) & (PTRS_PER_PGD - 1)
In most cases, pgd_offset() uses mm->pgd as the pointer to the
top-level page
On Tue, May 12, 2020 at 09:44:13PM +0300, Mike Rapoport wrote:
> diff --git a/arch/alpha/kernel/proto.h b/arch/alpha/kernel/proto.h
> index a093cd45ec79..701a05090141 100644
> --- a/arch/alpha/kernel/proto.h
> +++ b/arch/alpha/kernel/proto.h
> @@ -2,8 +2,6 @@
> #include
> #include
>
>
On Tue, May 12, 2020 at 09:44:18PM +0300, Mike Rapoport wrote:
> +++ b/include/linux/pgtable.h
> @@ -28,6 +28,24 @@
> #define USER_PGTABLES_CEILING 0UL
> #endif
>
> +/* FIXME: */
Fix you what? Add documentation?
> +static inline pmd_t *pmd_off(struct mm_struct *mm, unsigned long va)
From: Mike Rapoport
Hi,
The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definitions of pgd_offset() for 25 supported
architectures.
Most of these definitions are actually identical and
From: Mike Rapoport
The replacement of with made the include
of the latter in the middle of asm includes. Fix this up with the aid of
the below script and manual adjustments here and there.
import sys
import re
if len(sys.argv) != 3:
print "USAGE: %s
From: Mike Rapoport
The cache_page() and nocache_page() functions are only used by the Motorola
MMU variant for setting caching attributes for the page table pages.
Move the definitions of these functions from
arch/m68k/include/asm/motorola_pgtable.h closer to their usage in
From: Mike Rapoport
The powerpc 32-bit implementation of pgtable has nice shortcuts for
accessing kernel PMD and PTE for a given virtual address.
Make these helpers available for all architectures.
Signed-off-by: Mike Rapoport
---
arch/arc/mm/highmem.c | 10 +---
From: Mike Rapoport
The linux/mm.h header includes to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So, there is no point to explicitly include in
the files that include .
The include statements in such cases are removed with a simple
From: Mike Rapoport
All architectures use pXd_index() to get an entry in the page table page
corresponding to a virtual address.
Align csky with other architectures.
Signed-off-by: Mike Rapoport
---
arch/csky/include/asm/pgtable.h | 5 ++---
arch/csky/mm/fault.c | 2 +-
From: Mike Rapoport
The comment about page table allocation functions resides in
include/asm/motorola_pgtable.h while the functions live in
include/asm/motorola_pgalloc.h.
Move the comment close to the code.
Signed-off-by: Mike Rapoport
---
arch/m68k/include/asm/motorola_pgalloc.h | 6 ++
From: Mike Rapoport
There are three cases for the trampoline initialization:
* 32-bit does nothing
* 64-bit with kaslr disabled simply copies a PGD entry from the direct map
to the trampoline PGD
* 64-bit with kaslr enabled maps the real mode trampoline at PUD level
These cases are currently
From: Mike Rapoport
All architectures define pte_index() as
(address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)
and all architectures define pte_offset_kernel() as an entry
in the array of PTEs indexed by the pte_index().
For most architectures, the pte_offset_kernel() implementation