Nicholas Piggin's on August 20, 2019 3:11 pm:
> There is support for the kernel to execute the 'sc 0' instruction and
> make a system call to itself. This is a relic that is unused in the
> tree, therefore untested. It's also highly questionable for modules to
> be doing this.
Interrupts may come from user or kernel, so the stack pointer needs to
be set to either the base of the kernel stack, or a new frame on the
existing kernel stack pointer, respectively.
Using a branch for this can lead to r1-indexed memory operations being
speculatively executed using a value of
There is support for the kernel to execute the 'sc 0' instruction and
make a system call to itself. This is a relic that is unused in the
tree, therefore untested. It's also highly questionable for modules to
be doing this.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/entry_64.S
Le 20/08/2019 à 02:20, Michael Ellerman a écrit :
Nicholas Piggin writes:
Christophe Leroy's on August 14, 2019 6:11 am:
Until vmalloc system is up and running, ioremap basically
allocates addresses at the border of the IOREMAP area.
On PPC32, addresses are allocated down from the top of
On Fri, 2019-08-16 at 15:52 +0000, Christophe Leroy wrote:
> Resulting code (8xx with 16 bytes per cacheline and 16k pages)
>
> 016c <__flush_dcache_icache_phys>:
> 16c: 54 63 00 22 rlwinm r3,r3,0,0,17
> 170: 7d 20 00 a6 mfmsr r9
> 174: 39 40 04 00 li r10,1024
> 178:
On Mon, Aug 19, 2019 at 04:19:31AM -0500, Segher Boessenkool wrote:
> On Sun, Aug 18, 2019 at 12:13:21PM -0700, Nathan Chancellor wrote:
> > When building pseries_defconfig, building vdso32 errors out:
> >
> > error: unknown target ABI 'elfv1'
> >
> > Commit 4dc831aa8813 ("powerpc: Fix
Hello Bharata,
I have just a couple of small comments.
Bharata B Rao writes:
> +/*
> + * Get a free device PFN from the pool
> + *
> + * Called when a normal page is moved to secure memory (UV_PAGE_IN). Device
> + * PFN will be used to keep track of the secure page on HV side.
> + *
> + *
Santosh Sivaraj's on August 20, 2019 11:47 am:
> Hi Nick,
>
> Nicholas Piggin writes:
>
>> Santosh Sivaraj's on August 15, 2019 10:39 am:
>>> From: Balbir Singh
>>>
>>> The current code would fail on huge pages addresses, since the shift would
>>> be incorrect. Use the correct page shift
Christophe Leroy writes:
> Le 19/08/2019 à 08:28, Daniel Axtens a écrit :
>> In KASAN development I noticed that the powerpc-specific bitops
>> were not being picked up by the KASAN test suite.
>
> I'm not sure anybody cares about who noticed the problem. This sentence
> could be rephrased as:
The powerpc-specific bitops are not being picked up by the KASAN
test suite.
Instrumentation is done via the bitops/instrumented-{atomic,lock}.h
headers. They require that arch-specific versions of bitop functions
are renamed to arch_*. Do this renaming.
For clear_bit_unlock_is_negative_byte,
Currently bitops-instrumented.h assumes that the architecture provides
atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
This is true on x86 and s390, but is not always true: there is a
generic bitops/non-atomic.h header that provides generic non-atomic
operations, and also
Santosh Sivaraj writes:
> This series, which should be based on top of the still un-merged
> "powerpc: implement machine check safe memcpy" series, adds support
> to add the bad blocks which generated an MCE to the NVDIMM bad blocks.
> The next access of the same memory will be blocked by the
On Mon, Aug 19, 2019 at 07:46:00PM +0200, David Sterba wrote:
> Another thing that is lost is the slub debugging support for all
> architectures, because get_zeroed_pages lacks the red zones and sanity
> checks.
>
> I find working with raw pages in this code a bit inconsistent with the
> rest
Subscribe to the MCE notification and add the physical address which
generated a memory error to nvdimm bad range.
Signed-off-by: Santosh Sivaraj
---
drivers/nvdimm/of_pmem.c | 151 +--
1 file changed, 131 insertions(+), 20 deletions(-)
diff --git
Subscribe to the MCE notification and add the physical address which
generated a memory error to nvdimm bad range.
Signed-off-by: Santosh Sivaraj
---
arch/powerpc/platforms/pseries/papr_scm.c | 86 ++-
1 file changed, 85 insertions(+), 1 deletion(-)
diff --git
This is needed to report bad blocks for persistent memory.
Signed-off-by: Santosh Sivaraj
---
arch/powerpc/include/asm/mce.h | 3 +++
arch/powerpc/kernel/mce.c | 15 +++
2 files changed, 18 insertions(+)
diff --git a/arch/powerpc/include/asm/mce.h
This series, which should be based on top of the still un-merged
"powerpc: implement machine check safe memcpy" series, adds support
to add the bad blocks which generated an MCE to the NVDIMM bad blocks.
The next access of the same memory will be blocked by the NVDIMM layer
itself.
---
Santosh
From: Ryan Grimm
Enables running as a secure guest in platforms with an Ultravisor.
Signed-off-by: Ryan Grimm
Signed-off-by: Ram Pai
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/configs/ppc64_defconfig | 1 +
arch/powerpc/configs/pseries_defconfig | 1 +
2 files changed, 2
From: Sukadev Bhattiprolu
The POWER9 processor includes support for the Protected Execution Facility (PEF).
Attached documentation provides an overview of PEF and defines the API
for various interfaces that must be implemented in the Ultravisor
firmware as well as in the KVM Hypervisor.
Based on input
From: Anshuman Khandual
SWIOTLB checks range of incoming CPU addresses to be bounced and sees if
the device can access it through its DMA window without requiring bouncing.
In such cases it just chooses to skip bouncing. But for cases like secure
guests on powerpc platform all addresses need to
Secure guest memory is inaccessible to devices so regular DMA isn't
possible.
In that case set devices' dma_map_ops to NULL so that the generic
DMA code path will use SWIOTLB to bounce buffers for DMA.
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/platforms/pseries/iommu.c | 11
From: Sukadev Bhattiprolu
Normally, the HV emulates some instructions like MSGSNDP, MSGCLRP
from a KVM guest. To emulate the instructions, it must first read
the instruction from the guest's memory and decode its parameters.
However for a secure guest (aka SVM), the page containing the
From: Ryan Grimm
User space might want to know whether it's running in a secure VM. It can't
execute mfmsr because mfmsr is a privileged instruction.
The solution here is to create a cpu attribute:
/sys/devices/system/cpu/svm
which will read 0 or 1 based on the S bit of the current CPU.
Signed-off-by:
From: Ram Pai
A new kernel deserves a clean slate. Any pages shared with the hypervisor
are unshared before invoking the new kernel. However, there are exceptions.
If the new kernel is invoked to dump the current kernel, or if there is an
explicit request to preserve the state of the current
From: Anshuman Khandual
Secure guests need to share the DTL buffers with the hypervisor. To that
end, use a kmem_cache constructor which converts the underlying buddy
allocated SLUB cache pages into shared memory.
Signed-off-by: Anshuman Khandual
Signed-off-by: Thiago Jung Bauermann
---
From: Anshuman Khandual
LPPACA structures need to be shared with the host. Hence they need to be in
shared memory. Instead of allocating individual chunks of memory for a
given structure from memblock, a contiguous chunk of memory is allocated
and then converted into shared memory. Subsequent
Helps document what the hard-coded number means.
Also take the opportunity to fix an #endif comment.
Suggested-by: Alexey Kardashevskiy
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/kernel/paca.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git
From: Sukadev Bhattiprolu
Protected Execution Facility (PEF) is an architectural change for
POWER 9 that enables Secure Virtual Machines (SVMs). When enabled,
PEF adds a new higher privileged mode, called Ultravisor mode, to
POWER architecture.
The hardware changes include the following:
*
From: Ram Pai
These functions are used when the guest wants to grant the hypervisor
access to certain pages.
Signed-off-by: Ram Pai
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/include/asm/ultravisor-api.h | 2 ++
arch/powerpc/include/asm/ultravisor.h | 24
From: Ram Pai
Make the Enter-Secure-Mode (ESM) ultravisor call to switch the VM to secure
mode. Pass kernel base address and FDT address so that the Ultravisor is
able to verify the integrity of the VM using information from the ESM blob.
Add "svm=" command line option to turn on switching to
From: Benjamin Herrenschmidt
For secure VMs, the signing tool will create a ticket called the "ESM blob"
for the Enter Secure Mode ultravisor call with the signatures of the kernel
and initrd among other things.
This adds support to the wrapper script for adding that blob via the "-e"
option to
Introduce CONFIG_PPC_SVM to control support for secure guests and include
Ultravisor-related helpers when it is selected.
Signed-off-by: Thiago Jung Bauermann
---
arch/powerpc/include/asm/asm-prototypes.h | 2 +-
arch/powerpc/kernel/Makefile | 4 +++-
Hello,
This is a minor update of this patch series. It addresses review comments
made to v3. Details are in the changelog. The sysfs patch is updated and
included here but as I mentioned earlier can be postponed. It is marked
RFC for that reason.
As with the previous version, the patch
From: Claudio Carvalho
The ultracalls (ucalls for short) allow the Secure Virtual Machines
(SVM)s and hypervisor to request services from the ultravisor such as
accessing a register or memory region that can only be accessed when
running in ultravisor-privileged mode.
This patch adds the
Hi Nick,
Nicholas Piggin writes:
> Santosh Sivaraj's on August 15, 2019 10:39 am:
>> From: Balbir Singh
>>
>> The current code would fail on huge pages addresses, since the shift would
>> be incorrect. Use the correct page shift value returned by
>> __find_linux_pte() to get the correct
Nicholas Piggin writes:
> Christophe Leroy's on August 14, 2019 6:11 am:
>> Until vmalloc system is up and running, ioremap basically
>> allocates addresses at the border of the IOREMAP area.
>>
>> On PPC32, addresses are allocated down from the top of the area
>> while on PPC64, addresses are
Christophe Leroy writes:
> diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
> index 57d742509cec..889ee656cf64 100644
> --- a/arch/powerpc/mm/ioremap.c
> +++ b/arch/powerpc/mm/ioremap.c
> @@ -103,3 +103,46 @@ void iounmap(volatile void __iomem *token)
> vunmap(addr);
> }
Christophe Leroy writes:
> Hi,
>
> Le 19/08/2019 à 18:37, Nathan Lynch a écrit :
>> Hi,
>>
>> Christophe Leroy writes:
>>> Benchmark from vdsotest:
>>
>> I assume you also ran the verification/correctness parts of vdsotest...? :-)
>>
>
> I did run vdsotest-all. I guess it runs the
> On Aug 18, 2019, at 8:58 PM, Daniel Axtens wrote:
>
>>> Each page of shadow memory represents 8 pages of real memory. Could we use
>>> page_ref to count how many pieces of a shadow page are used so that we can
>>> free it when the ref count decreases to 0.
>
> I'm not sure how much of a
Hi,
Le 19/08/2019 à 18:37, Nathan Lynch a écrit :
Hi,
Christophe Leroy writes:
Benchmark from vdsotest:
I assume you also ran the verification/correctness parts of vdsotest...? :-)
I did run vdsotest-all. I guess it runs the verifications too ?
Christophe
On Mon, Aug 19, 2019 at 2:32 AM Aneesh Kumar K.V
wrote:
>
> Aneesh Kumar K.V writes:
>
> > Dan Williams writes:
> >
> >> On Fri, Aug 9, 2019 at 12:45 AM Aneesh Kumar K.V
> >> wrote:
> >>>
> >>
>
> ...
>
> >>> diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
> >>> index
Le 19/08/2019 à 19:46, David Sterba a écrit :
On Sat, Aug 17, 2019 at 07:44:39AM +0000, Christophe Leroy wrote:
Various notifications of type "BUG kmalloc-4096 () : Redzone
overwritten" have been observed recently in various parts of
the kernel. After some time, a relation has been made
On Sat, Aug 17, 2019 at 07:44:39AM +0000, Christophe Leroy wrote:
> Various notifications of type "BUG kmalloc-4096 () : Redzone
> overwritten" have been observed recently in various parts of
> the kernel. After some time, a relation has been made with
> the use of the BTRFS filesystem.
>
> [
On Mon, Aug 19, 2019 at 09:28:03AM -0700, Kees Cook wrote:
> On Mon, Aug 19, 2019 at 01:06:28PM +0000, Christophe Leroy wrote:
> > __WARN() used to just call __WARN_TAINT(TAINT_WARN)
> >
> > But a call to printk() has been added in the commit identified below
> > to print a " cut here "
On Mon, Aug 19, 2019 at 12:07 AM Aneesh Kumar K.V
wrote:
>
> Dan Williams writes:
>
> > On Tue, Aug 13, 2019 at 9:22 PM Dan Williams
> > wrote:
> >>
> >> Hi Aneesh, logic looks correct but there are some cleanups I'd like to
> >> see and a lead-in patch that I attached.
> >>
> >> I've started
Hi,
Christophe Leroy writes:
> Benchmark from vdsotest:
I assume you also ran the verification/correctness parts of vdsotest...? :-)
On Mon, Aug 19, 2019 at 01:06:28PM +0000, Christophe Leroy wrote:
> __WARN() used to just call __WARN_TAINT(TAINT_WARN)
>
> But a call to printk() has been added in the commit identified below
> to print a " cut here " line.
>
> This change only applies to warnings using __WARN(), which
On 14/08/19 3:51 PM, Mahesh Jagannath Salgaonkar wrote:
> On 8/14/19 12:36 PM, Hari Bathini wrote:
>>
>>
>> On 13/08/19 4:11 PM, Mahesh J Salgaonkar wrote:
>>> On 2019-07-16 17:03:15 Tue, Hari Bathini wrote:
OPAL allows registering an address with it in the first kernel and
retrieving it
On Mon, Aug 19, 2019 at 05:05:46PM +0200, Christophe Leroy wrote:
> Le 19/08/2019 à 16:37, Segher Boessenkool a écrit :
> >On Mon, Aug 19, 2019 at 04:08:43PM +0200, Christophe Leroy wrote:
> >>Le 19/08/2019 à 15:23, Segher Boessenkool a écrit :
> >>>On Mon, Aug 19, 2019 at 01:06:31PM +0000,
Le 19/08/2019 à 16:37, Segher Boessenkool a écrit :
On Mon, Aug 19, 2019 at 04:08:43PM +0200, Christophe Leroy wrote:
Le 19/08/2019 à 15:23, Segher Boessenkool a écrit :
On Mon, Aug 19, 2019 at 01:06:31PM +0000, Christophe Leroy wrote:
Note that we keep using an assembly text using "twi
Segher Boessenkool's on August 20, 2019 12:24 am:
> On Mon, Aug 19, 2019 at 01:58:12PM +0000, Christophe Leroy wrote:
>> -#define LOAD_REG_IMMEDIATE_SYM(reg,expr)\
>> -lis reg,(expr)@highest; \
>> -ori reg,reg,(expr)@higher; \
>> -rldicr reg,reg,32,31;
On Sun, Aug 18, 2019 at 10:23 PM Christoph Hellwig wrote:
>
> On Fri, Aug 16, 2019 at 02:04:35PM -0700, Rob Clark wrote:
> > I don't disagree about needing an API to get uncached memory (or
> > ideally just something outside of the linear map). But I think this
> > is a separate problem.
> >
> >
On Mon, Aug 19, 2019 at 04:08:43PM +0200, Christophe Leroy wrote:
> Le 19/08/2019 à 15:23, Segher Boessenkool a écrit :
> >On Mon, Aug 19, 2019 at 01:06:31PM +0000, Christophe Leroy wrote:
> >>Note that we keep using an assembly text using "twi 31, 0, 0" for
> >>unconditional traps because GCC
Santosh Sivaraj's on August 15, 2019 10:39 am:
> From: Balbir Singh
>
> If we take a UE on one of the instructions with a fixup entry, set nip
> to continue execution at the fixup entry. Stop processing the event
> further or print it.
The previous patch added these fixup entries and now you
Does my explanation from Thursday make sense or is it completely
off? Does the patch description need some update to be less
confusing to those used to different terminology?
On Thu, Aug 15, 2019 at 12:50:02PM +0200, Christoph Hellwig wrote:
> Except for the different naming scheme vs the code
On Mon, Aug 19, 2019 at 01:58:12PM +0000, Christophe Leroy wrote:
> -#define LOAD_REG_IMMEDIATE_SYM(reg,expr) \
> - lis reg,(expr)@highest; \
> - ori reg,reg,(expr)@higher; \
> - rldicr reg,reg,32,31; \
> - oris reg,reg,(expr)@__AS_ATHIGH;
Santosh Sivaraj's on August 15, 2019 10:39 am:
> From: Balbir Singh
>
> The current code would fail on huge pages addresses, since the shift would
> be incorrect. Use the correct page shift value returned by
> __find_linux_pte() to get the correct physical address. The code is more
> generic and
Hi Christophe,
On Mon, Aug 19, 2019 at 01:58:10PM +0000, Christophe Leroy wrote:
> +.macro __LOAD_REG_IMMEDIATE r, x
> +	.if (\x) >= 0x80000000 || (\x) < -0x80000000
> +		__LOAD_REG_IMMEDIATE_32 \r, (\x) >> 32
> +		sldi	\r, \r, 32
> +		.if (\x) & 0x
Le 19/08/2019 à 15:23, Segher Boessenkool a écrit :
On Mon, Aug 19, 2019 at 01:06:31PM +0000, Christophe Leroy wrote:
Note that we keep using an assembly text using "twi 31, 0, 0" for
unconditional traps because GCC drops all code after
__builtin_trap() when the condition is always true at
Santosh Sivaraj's on August 15, 2019 10:39 am:
> schedule_work() cannot be called from MCE exception context as MCE can
> interrupt even in interrupt disabled context.
The powernv code doesn't do this in general, rather defers kernel
MCEs. My patch series converts the pseries machine check
Optimise LOAD_REG_IMMEDIATE_SYM() using a temporary register to
parallelise operations.
It reduces the path from 5 to 3 instructions.
Suggested-by: Segher Boessenkool
Signed-off-by: Christophe Leroy
---
v3: new
---
arch/powerpc/include/asm/ppc_asm.h | 12 ++--
LOAD_MSR_KERNEL() and LOAD_REG_IMMEDIATE() are doing the same thing
in the same way. Drop LOAD_MSR_KERNEL().
Signed-off-by: Christophe Leroy
---
v2: no change
v3: no change
---
arch/powerpc/kernel/entry_32.S | 18 +-
arch/powerpc/kernel/head_32.h | 21 -
2
Today LOAD_REG_IMMEDIATE() is a basic #define which loads all
parts of a value into a register, including the parts that are zero.
This means always 2 instructions on PPC32 and always 5 instructions
on PPC64. And those instructions cannot run in parallel as they are
updating the same register.
Christophe Leroy's on August 14, 2019 4:31 pm:
> Hi Nick,
>
>
> Le 07/06/2018 à 03:43, Nicholas Piggin a écrit :
>> On Wed, 6 Jun 2018 14:21:08 + (UTC)
>> Christophe Leroy wrote:
>>
>>> scaled cputime is only meaningful when the processor has
>>> SPURR and/or PURR, which means only on
Christophe Leroy's on August 14, 2019 6:11 am:
> Until vmalloc system is up and running, ioremap basically
> allocates addresses at the border of the IOREMAP area.
>
> On PPC32, addresses are allocated down from the top of the area
> while on PPC64, addresses are allocated up from the base of the
On Mon, Aug 19, 2019 at 01:06:31PM +0000, Christophe Leroy wrote:
> Note that we keep using an assembly text using "twi 31, 0, 0" for
> unconditional traps because GCC drops all code after
> __builtin_trap() when the condition is always true at build time.
As I said, it can also do this for
Michael Ellerman's on August 17, 2019 8:25 am:
> kbuild test robot writes:
>> Hi Nicholas,
>>
>> I love your patch! Yet something to improve:
>>
>> [auto build test ERROR on linus/master]
>> [cannot apply to v5.3-rc3 next-20190807]
>> [if your patch is applied to the wrong git tree, please drop
Michael Ellerman's on August 18, 2019 1:49 pm:
> Nicholas Piggin writes:
>> diff --git a/arch/powerpc/kernel/exceptions-64s.S
>> b/arch/powerpc/kernel/exceptions-64s.S
>> index eee5bef736c8..64d5ffbb07d1 100644
>> --- a/arch/powerpc/kernel/exceptions-64s.S
>> +++
Michael Ellerman's on August 19, 2019 12:00 pm:
> Nicholas Piggin writes:
>> Rather than sprinkle various translation structure invalidations
>> around different places in early boot, have each CPU flush everything
>> from its local translation structures before enabling its MMU.
>>
>> Radix
The examples of WARN_ON() use below show that the result
is sub-optimal with regard to the capabilities of powerpc.
void test_warn1(unsigned long long a)
{
WARN_ON(a);
}
void test_warn2(unsigned long a)
{
WARN_ON(a);
}
void test_warn3(unsigned long a, unsigned long b)
{
__WARN() used to just call __WARN_TAINT(TAINT_WARN)
But a call to printk() has been added in the commit identified below
to print a " cut here " line.
This change only applies to warnings using __WARN(), which means
WARN_ON() where the condition is constant at compile time.
For WARN_ON()
BUG(), WARN() and friends are using a similar inline
assembly to implement various traps with various flags.
Let's refactor via a new BUG_ENTRY() macro.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/bug.h | 41 +++--
1 file changed, 15
Christophe Leroy writes:
> diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
> index 0c23660522ca..57d742509cec 100644
> --- a/arch/powerpc/mm/ioremap.c
> +++ b/arch/powerpc/mm/ioremap.c
> @@ -72,3 +75,31 @@ void __iomem *ioremap_prot(phys_addr_t addr, unsigned long
> size,
The IMA subsystem supports custom, built-in, and arch-specific policies to
define the files to be measured and appraised. These policies are honored
based on priority, where arch-specific policies are the highest and custom
is the lowest.
OpenPOWER systems rely on IMA for signature verification of the
POWER secure boot relies on the kernel IMA security subsystem to
perform the OS kernel image signature verification. Since each secure
boot mode has different IMA policy requirements, dynamic definition of
the policy rules based on the runtime secure boot mode of the system is
required. On systems
Secure boot on POWER defines different IMA policies based on the secure
boot state of the system.
This patch defines a function to detect the secure boot state of the
system.
The PPC_SECURE_BOOT config represents the base enablement of secureboot
on POWER.
Signed-off-by: Nayna Jain
---
Le 19/08/2019 à 08:28, Daniel Axtens a écrit :
In KASAN development I noticed that the powerpc-specific bitops
were not being picked up by the KASAN test suite.
I'm not sure anybody cares about who noticed the problem. This sentence
could be rephrased as:
The powerpc-specific bitops are
Le 19/08/2019 à 08:28, Daniel Axtens a écrit :
Currently bitops-instrumented.h assumes that the architecture provides
atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
This is true on x86 and s390, but is not always true: there is a
generic bitops/non-atomic.h header
On Wed, 2019-08-14 at 20:42 -0700, Bart Van Assche wrote:
> On 8/14/19 10:18 AM, Abdul Haleem wrote:
> > On Wed, 2019-08-14 at 10:05 -0700, Bart Van Assche wrote:
> >> On 8/14/19 9:52 AM, Abdul Haleem wrote:
> >>> Greeting's
> >>>
> >>> Today's linux-next kernel (5.3.0-rc4-next-20190813) booted
On Fri, Aug 16, 2019 at 10:41:00AM -0700, Andy Lutomirski wrote:
> On Fri, Aug 16, 2019 at 10:08 AM Mark Rutland wrote:
> >
> > Hi Christophe,
> >
> > On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote:
> > > Le 15/08/2019 à 02:16, Daniel Axtens a écrit :
> > > > Hook into vmalloc
Aneesh Kumar K.V writes:
> Dan Williams writes:
>
>> On Fri, Aug 9, 2019 at 12:45 AM Aneesh Kumar K.V
>> wrote:
>>>
>>
...
>>> diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
>>> index 37e96811c2fc..c1d9be609322 100644
>>> --- a/drivers/nvdimm/pfn_devs.c
>>> +++
On Sun, Aug 18, 2019 at 12:13:21PM -0700, Nathan Chancellor wrote:
> When building pseries_defconfig, building vdso32 errors out:
>
> error: unknown target ABI 'elfv1'
>
> Commit 4dc831aa8813 ("powerpc: Fix compiling a BE kernel with a
> powerpc64le toolchain") added these flags to fix
Hi Russell,
On Tue, Aug 06, 2019 at 06:00:15PM +0800, Leo Yan wrote:
> This patch implements arm specific functions regs_set_return_value() and
> override_function_with_return() to support function error injection.
>
> In the exception flow, it updates pt_regs::ARM_pc with pt_regs::ARM_lr
> so
On Monday, August 19, 2019 10:33:25 AM CEST Ran Wang wrote:
> Hi Rafael,
>
> On Monday, August 19, 2019 16:20, Rafael J. Wysocki wrote:
> >
> > On Mon, Aug 19, 2019 at 10:15 AM Ran Wang wrote:
> > >
> > > Hi Rafael,
> > >
> > > On Monday, August 05, 2019 17:59, Rafael J. Wysocki wrote:
> > > >
Hi Rafael,
On Monday, August 19, 2019 16:20, Rafael J. Wysocki wrote:
>
> On Mon, Aug 19, 2019 at 10:15 AM Ran Wang wrote:
> >
> > Hi Rafael,
> >
> > On Monday, August 05, 2019 17:59, Rafael J. Wysocki wrote:
> > >
> > > On Wednesday, July 24, 2019 9:47:20 AM CEST Ran Wang wrote:
> > > > Some
On Mon, Aug 19, 2019 at 10:15 AM Ran Wang wrote:
>
> Hi Rafael,
>
> On Monday, August 05, 2019 17:59, Rafael J. Wysocki wrote:
> >
> > On Wednesday, July 24, 2019 9:47:20 AM CEST Ran Wang wrote:
> > > Some user might want to go through all registered wakeup sources and
> > > doing things
Hi Drew,
I recently noticed gcc suddenly generating ugly code for WARN_ON(1).
It looks like commit 6b15f678fb7d ("include/asm-generic/bug.h: fix "cut
here" for WARN_ON for __WARN_TAINT architectures") is the culprit.
unsigned long test_mul1(unsigned long a, unsigned long b)
{
unsigned
Hi Rafael,
On Monday, August 05, 2019 17:59, Rafael J. Wysocki wrote:
>
> On Wednesday, July 24, 2019 9:47:20 AM CEST Ran Wang wrote:
> > Some user might want to go through all registered wakeup sources and
> > doing things accordingly. For example, SoC PM driver might need to do
> > HW
On Mon, Aug 19, 2019 at 07:40:42AM +0200, Christophe Leroy wrote:
> Le 18/08/2019 à 14:01, Segher Boessenkool a écrit :
> >On Sat, Aug 17, 2019 at 09:04:42AM +0000, Christophe Leroy wrote:
> >>Unlike BUG_ON(x), WARN_ON(x) uses !!(x) as the trigger
> >>of the t(d/w)nei instruction instead of using
Hi Nathan,
> When building pseries_defconfig, building vdso32 errors out:
>
> error: unknown target ABI 'elfv1'
>
> Commit 4dc831aa8813 ("powerpc: Fix compiling a BE kernel with a
> powerpc64le toolchain") added these flags to fix building GCC but
> clang is multitargeted and does not need
Dan Williams writes:
> On Fri, Aug 9, 2019 at 12:45 AM Aneesh Kumar K.V
> wrote:
>>
>> Use PAGE_SIZE instead of SZ_4K and sizeof(struct page) instead of 64.
>> If we have a kernel built with different struct page size the previous
>> patch should handle marking the namespace disabled.
>
> Each
Dan Williams writes:
> On Tue, Aug 13, 2019 at 9:22 PM Dan Williams wrote:
>>
>> Hi Aneesh, logic looks correct but there are some cleanups I'd like to
>> see and a lead-in patch that I attached.
>>
>> I've started prefixing nvdimm patches with:
>>
>> libnvdimm/$component:
>>
>> ...since
If a page is already mapped RW without the DIRTY flag, the DIRTY
flag is never set and a TLB store miss exception is taken forever.
This is easily reproduced with the following app:
void main(void)
{
volatile char *ptr = mmap(0, 4096, PROT_READ | PROT_WRITE, MAP_SHARED |
MAP_ANONYMOUS,
In KASAN development I noticed that the powerpc-specific bitops
were not being picked up by the KASAN test suite.
Instrumentation is done via the bitops/instrumented-{atomic,lock}.h
headers. They require that arch-specific versions of bitop functions
are renamed to arch_*. Do this renaming.
For
Currently bitops-instrumented.h assumes that the architecture provides
atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
This is true on x86 and s390, but is not always true: there is a
generic bitops/non-atomic.h header that provides generic non-atomic
operations, and also
Hi Michael,
Is there anything more I should do to get this feature to meet the
requirements for mainline?
Thanks,
Jason
On 2019/8/9 18:07, Jason Yan wrote:
This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of