Re: [PATCH v11 4/5] powerpc/fsl: move mpc85xx.h to include/linux/fsl

2016-07-26 Thread Scott Wood
On Mon, 2016-07-25 at 06:12 +, Yangbo Lu wrote:
> Hi Scott,
> 
> 
> > 
> > -Original Message-
> > From: Scott Wood [mailto:o...@buserror.net]
> > Sent: Friday, July 22, 2016 12:45 AM
> > To: Michael Ellerman; Arnd Bergmann
> > Cc: linux-...@vger.kernel.org; devicet...@vger.kernel.org; linuxppc-
> > d...@lists.ozlabs.org; linux-ker...@vger.kernel.org; Yangbo Lu
> > Subject: Re: [PATCH v11 4/5] powerpc/fsl: move mpc85xx.h to
> > include/linux/fsl
> > 
> > On Thu, 2016-07-21 at 20:26 +1000, Michael Ellerman wrote:
> > > 
> > > Quoting Scott Wood (2016-07-21 04:31:48)
> > > > 
> > > > 
> > > > On Wed, 2016-07-20 at 13:24 +0200, Arnd Bergmann wrote:
> > > > > 
> > > > > 
> > > > > On Saturday, July 16, 2016 9:50:21 PM CEST Scott Wood wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > From: yangbo lu 
> > > > > > 
> > > > > > Move mpc85xx.h to include/linux/fsl and rename it to svr.h as a
> > > > > > common header file.  This SVR numberspace is used on some ARM
> > > > > > chips as well as PPC, and even to check for a PPC SVR, multi-arch
> > > > > > drivers would otherwise need to ifdef the header inclusion and
> > > > > > all references to the SVR symbols.
> > > > > > 
> > > > > > Signed-off-by: Yangbo Lu 
> > > > > > Acked-by: Wolfram Sang 
> > > > > > Acked-by: Stephen Boyd 
> > > > > > Acked-by: Joerg Roedel 
> > > > > > [scottwood: update description]
> > > > > > Signed-off-by: Scott Wood 
> > > > > > 
> > > > > As discussed before, please don't introduce yet another vendor
> > > > > specific way to match a SoC ID from a device driver.
> > > > > 
> > > > > I've posted a patch for an extension to the soc_device
> > > > > infrastructure to allow comparing the running SoC to a table of
> > > > > devices, use that instead.
> > > > As I asked before, in which relevant maintainership capacity are you
> > > > NACKing this?
> > > I'll nack the powerpc part until you guys can agree.
> > OK, I've pulled these patches out.
> > 
> > For the MMC issue I suggest using ifdef CONFIG_PPC and mfspr(SPRN_SVR)
> > like the clock driver does[1] and we can revisit the issue if/when we
> > need to do something similar on an ARM chip.
> [Lu Yangbo-B47093] I remembered that Uffe had opposed us introducing non-
> generic header files (like '#include ')
> in the mmc driver initially. So I think it will not be accepted to use
> ifdef CONFIG_PPC and mfspr(SPRN_SVR)...
> And this method still couldn't get the SVR of an ARM chip now.

Right, as I said we'll have to revisit the issue if/when we have the same
problem on an ARM chip.  That also applies if the PPC ifdef is still getting
NACKed from the MMC side.

> Any other suggestion here?

The other option is to try to come up with something that fits into Arnd's
framework while addressing the concerns I raised.  The soc_id string should be
well-structured to avoid mismatches and compatibility problems (especially
since it would get exposed to userspace).  Maybe something like:

svr:<svr>,svre:<svre>,name:<name>,die:<die>,rev:X.Y,<tag>,<tag>,<...>,

with the final comma used so that globs can put a colon on either end to be
sure they're matching a full field.  The SoC die name would be the primary
chip for a given die (e.g. p4040 would have a die name of p4080).  The "name"
and "die" fields would never include the trailing "e" indicated by the E bit.

Extra tags could be used for common groupings, such as all chips from a
particular die before a certain revision.  Once a tag is added it can't be
removed or reordered, to maintain userspace compatibility, but new tags could
be appended.

Some examples:

svr:0x82000020,svre:0x82000020,name:p4080,die:p4080,rev:2.0,
svr:0x82000020,svre:0x82080020,name:p4080,die:p4080,rev:2.0,
svr:0x82000030,svre:0x82000030,name:p4080,die:p4080,rev:3.0,
svr:0x82000030,svre:0x82080030,name:p4080,die:p4080,rev:3.0,
svr:0x82010020,svre:0x82010020,name:p4040,die:p4080,rev:2.0,
svr:0x82010020,svre:0x82090020,name:p4040,die:p4080,rev:2.0,
svr:0x82010030,svre:0x82010030,name:p4040,die:p4080,rev:3.0,
svr:0x82010030,svre:0x82090030,name:p4040,die:p4080,rev:3.0,

Then if you want to apply a workaround on:
- all chips using the p4080 die, match with "*,die:p4080,*"
- all chips using the rev 2.0 p4080 die, match with "*,die:p4080,rev:2.0,*"
- Only p4040, but of any rev, match with "*,name:p4040,*"

Matching via open-coded hex number should be considered a last resort (it's
more error-prone, either for getting the number wrong or for forgetting
variants -- the latter is already a common problem), but preferable to adding
too many tags.

Using wildcards within a tag field would be discouraged.  

-Scott

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

Re: [PATCH 2/3] powerpc/mm: Rename hpte_init_lpar() & put fallback in a header

2016-07-26 Thread Michael Ellerman
Benjamin Herrenschmidt  writes:

> On Mon, 2016-07-25 at 20:36 +1000, Michael Ellerman wrote:
>> That would be nice, but these look fishy at least:
>> 
>> arch/powerpc/platforms/cell/spu_manage.c:   if (!firmware_has_feature(FW_FEATURE_LPAR))
>> arch/powerpc/platforms/cell/spu_manage.c:   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>> arch/powerpc/platforms/cell/spu_manage.c:   if (!firmware_has_feature(FW_FEATURE_LPAR))
>
> Those can just be checks for LV1, I think...

Yeah they can now.

Previously we had BEAT (the celleb HV), but now that's gone we can
switch those to LV1.

>> arch/powerpc/platforms/pasemi/iommu.c:  !firmware_has_feature(FW_FEATURE_LPAR)) {
>> drivers/net/ethernet/pasemi/pasemi_mac.c:   return firmware_has_feature(FW_FEATURE_LPAR);
>
> And that was some experimental PAPR'ish thing, wasn't it?

Not sure, it was news to me that pasemi ever had any HV support.

cheers

[PATCH v2 3/3] kexec: extend kexec_file_load system call

2016-07-26 Thread Thiago Jung Bauermann
A device tree blob must be passed to a second kernel on DTB-capable
architectures, such as powerpc and arm64, but the current kernel
interface lacks this support.

This patch extends the kexec_file_load system call by adding an extra
argument so that an arbitrary number of file descriptors can be handed
from user space to the kernel.

long sys_kexec_file_load(int kernel_fd, int initrd_fd,
 unsigned long cmdline_len,
 const char __user *cmdline_ptr,
 unsigned long flags,
 const struct kexec_fdset __user *ufdset);

If KEXEC_FILE_EXTRA_FDS is set in the "flags" argument, the "ufdset"
argument points to a buffer with the following struct:

struct kexec_fdset {
int nr_fds;
struct kexec_file_fd fds[0];
}

Signed-off-by: AKASHI Takahiro 
Signed-off-by: Thiago Jung Bauermann 
---

Notes:
This is a new version of the last patch in this series, which adds
a function where each architecture can verify whether the DTB is
safe to load:

int __weak arch_kexec_verify_buffer(enum kexec_file_type type,
const void *buf,
unsigned long size)
{
return -EINVAL;
}

I will then provide an implementation in my powerpc patch series
which checks that the DTB only contains nodes and properties from a
whitelist. arch_kexec_kernel_image_load will copy these properties
to the device tree blob the kernel was booted with (and perform
other changes such as setting /chosen/bootargs, of course).

I made the following additional changes:
- renamed KEXEC_FILE_TYPE_DTB to KEXEC_FILE_TYPE_PARTIAL_DTB,
- limited max number of fds to KEXEC_SEGMENT_MAX,
- changed to use fixed size buffer for fdset instead of allocating it,
- changed to return -EINVAL if an unknown file type is found in fdset.

 include/linux/fs.h |  1 +
 include/linux/kexec.h  |  7 ++--
 include/linux/syscalls.h   |  4 ++-
 include/uapi/linux/kexec.h | 22 
 kernel/kexec_file.c| 83 ++
 5 files changed, 108 insertions(+), 9 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index dd288148a6b1..5e0ee342b457 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2634,6 +2634,7 @@ extern int do_pipe_flags(int *, int);
id(MODULE, kernel-module)   \
id(KEXEC_IMAGE, kexec-image)\
id(KEXEC_INITRAMFS, kexec-initramfs)\
+   id(KEXEC_PARTIAL_DTB, kexec-partial-dtb)\
id(POLICY, security-policy) \
id(MAX_ID, )
 
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 554c8480dba3..b7eec336e935 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -146,7 +146,10 @@ struct kexec_file_ops {
kexec_verify_sig_t *verify_sig;
 #endif
 };
-#endif
+
+int __weak arch_kexec_verify_buffer(enum kexec_file_type type, const void *buf,
+   unsigned long size);
+#endif /* CONFIG_KEXEC_FILE */
 
 struct kimage {
kimage_entry_t head;
@@ -277,7 +280,7 @@ extern int kexec_load_disabled;
 
 /* List of defined/legal kexec file flags */
 #define KEXEC_FILE_FLAGS   (KEXEC_FILE_UNLOAD | KEXEC_FILE_ON_CRASH | \
-KEXEC_FILE_NO_INITRAMFS)
+KEXEC_FILE_NO_INITRAMFS | KEXEC_FILE_EXTRA_FDS)
 
 #define VMCOREINFO_BYTES   (4096)
 #define VMCOREINFO_NOTE_NAME   "VMCOREINFO"
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index d02239022bd0..fc072bdb74e3 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -66,6 +66,7 @@ struct perf_event_attr;
 struct file_handle;
 struct sigaltstack;
 union bpf_attr;
+struct kexec_fdset;
 
 #include 
 #include 
@@ -321,7 +322,8 @@ asmlinkage long sys_kexec_load(unsigned long entry, 
unsigned long nr_segments,
 asmlinkage long sys_kexec_file_load(int kernel_fd, int initrd_fd,
unsigned long cmdline_len,
const char __user *cmdline_ptr,
-   unsigned long flags);
+   unsigned long flags,
+   const struct kexec_fdset __user *ufdset);
 
 asmlinkage long sys_exit(int error_code);
 asmlinkage long sys_exit_group(int error_code);
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 99048e501b88..32e0cefe2000 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -23,6 +23,28 @@
 #define KEXEC_FILE_UNLOAD  0x0001
 #define KEXEC_FILE_ON_CRASH0x0002
 #define KEXEC_FILE_NO_INITRAMFS0x0004
+#define 

Re: [patch] ide: missing break statement in set_timings_mdma()

2016-07-26 Thread David Miller
From: Dan Carpenter 
Date: Thu, 14 Jul 2016 13:48:02 +0300

> There was clearly supposed to be a break statement here.  Currently we
> use the k2 ata timings instead of the sh ata ones we intended.  Probably no
> one has this hardware anymore, so it likely doesn't make a difference
> beyond the static checker warning.
> 
> Signed-off-by: Dan Carpenter 

Applied.

Re: [PATCH 3/3] powerpc: Convert fsl_rstcr_restart to a reset handler

2016-07-26 Thread Scott Wood
On Tue, 2016-07-26 at 14:22 -0700, Andrey Smirnov wrote:
> On Tue, Jul 26, 2016 at 12:59 AM, Scott Wood  wrote:
> > 
> > On Mon, 2016-07-25 at 21:25 -0700, Andrey Smirnov wrote:
> > > 
> > > Convert fsl_rstcr_restart into a function to be registered with the
> > > register_restart_handler() API and introduce fsl_rstcr_restart_register(),
> > > a function that can be added as an initcall to do the aforementioned
> > > registration.
> > > 
> > > Signed-off-by: Andrey Smirnov 
> > Is there a particular motivation for this (e.g. new handlers you plan to
> > register elsewhere)?
> I have an MPC8548-based board which uses, at least for the time being,
> SBC8548's init code (by claiming compatibility in the DT), which has an
> external watchdog that implements reset functionality. The driver for
> the watchdog is just a generic watchdog driver, and having the ability
> to register custom reset handlers is very handy.
> 
> I don't really have any motivation for fixing boards other than the
> SBC8548, and even that I can avoid by making a new custom board file
> in my tree that would not populate the .reset field. I can drop this
> patch from the series if the code for those boards is in a "don't
> touch it unless you absolutely have to" state.

I'm not saying not to touch it -- I just wanted to understand the context.

-Scott


Re: [PATCH 3/3] powerpc: Convert fsl_rstcr_restart to a reset handler

2016-07-26 Thread Andrey Smirnov
On Tue, Jul 26, 2016 at 12:59 AM, Scott Wood  wrote:
> On Mon, 2016-07-25 at 21:25 -0700, Andrey Smirnov wrote:
>> Convert fsl_rstcr_restart into a function to be registered with the
>> register_restart_handler() API and introduce fsl_rstcr_restart_register(),
>> a function that can be added as an initcall to do the aforementioned
>> registration.
>>
>> Signed-off-by: Andrey Smirnov 
>
> Is there a particular motivation for this (e.g. new handlers you plan to
> register elsewhere)?

I have an MPC8548-based board which uses, at least for the time being,
SBC8548's init code (by claiming compatibility in the DT), which has an
external watchdog that implements reset functionality. The driver for
the watchdog is just a generic watchdog driver, and having the ability
to register custom reset handlers is very handy.

I don't really have any motivation for fixing boards other than the
SBC8548, and even that I can avoid by making a new custom board file
in my tree that would not populate the .reset field. I can drop this
patch from the series if the code for those boards is in a "don't
touch it unless you absolutely have to" state.

>
>> diff --git a/arch/powerpc/platforms/85xx/bsc913x_qds.c
>> b/arch/powerpc/platforms/85xx/bsc913x_qds.c
>> index 07dd6ae..14ea7a0 100644
>> --- a/arch/powerpc/platforms/85xx/bsc913x_qds.c
>> +++ b/arch/powerpc/platforms/85xx/bsc913x_qds.c
>> @@ -53,6 +53,7 @@ static void __init bsc913x_qds_setup_arch(void)
>>  }
>>
>>  machine_arch_initcall(bsc9132_qds, mpc85xx_common_publish_devices);
>> +machine_arch_initcall(bsc9132_qds, fsl_rstcr_restart_register);
>
> Do we really still need to call the registration on a per-board basis, now
> that boards have a way of registering a higher-priority notifier?  Can't we
> just have setup_rstcr() do the registration when it finds the appropriate
> device tree node?

I think we could, that idea just never occurred to me. What you
describe would be a cleaner way to handle this change; I'll convert
the code to do that in v2.

>
>> +int fsl_rstcr_restart_register(void)
>> +{
>> + static struct notifier_block restart_handler;
>> +
>> + restart_handler.notifier_call = fsl_rstcr_restart;
>> + restart_handler.priority = 128;
>> +
>> + return register_restart_handler(&restart_handler);
>> +}
>> +EXPORT_SYMBOL(fsl_rstcr_restart_register);
>
> When would this ever get called from a module?

Probably never, that's just a mistake on my part. Will remove in v2.

Thanks,
Andrey

_PAGE_PRESENT and _PAGE_ACCESSED

2016-07-26 Thread LEROY Christophe
In the ppc8xx tlbmiss handler, we consider a page valid if both
_PAGE_PRESENT and _PAGE_ACCESSED are set.

Is there any chance of having _PAGE_ACCESSED set but not _PAGE_PRESENT?
Otherwise we could simplify the handler by considering the page valid
only when _PAGE_ACCESSED is set.


Christophe

[PATCH 6/6] powerpc/kernel: Check features don't change after patching

2016-07-26 Thread Michael Ellerman
Early in boot we binary patch some sections of code based on the CPU and
MMU feature bits. But it is one-time patching; there is no facility
for repatching the code later if the set of features changes.

It is a major bug if the set of features changes after we've done the
code patching - so add a check for it.

Signed-off-by: Michael Ellerman 
---
 arch/powerpc/lib/feature-fixups.c | 27 ++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/feature-fixups.c 
b/arch/powerpc/lib/feature-fixups.c
index defb2998b818..854b8ba40f8e 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -152,10 +152,19 @@ static void do_final_fixups(void)
 #endif
 }
 
-void apply_feature_fixups(void)
+static unsigned long __initdata saved_cpu_features;
+static unsigned int __initdata saved_mmu_features;
+#ifdef CONFIG_PPC64
+static unsigned long __initdata saved_firmware_features;
+#endif
+
+void __init apply_feature_fixups(void)
 {
struct cpu_spec *spec = *PTRRELOC(_cpu_spec);
 
+   saved_cpu_features = spec->cpu_features;
+   saved_mmu_features = spec->mmu_features;
+
/*
 * Apply the CPU-specific and firmware specific fixups to kernel text
 * (nop out sections not relevant to this CPU or this firmware).
@@ -173,12 +182,28 @@ void apply_feature_fixups(void)
 PTRRELOC(&__stop___lwsync_fixup));
 
 #ifdef CONFIG_PPC64
+   saved_firmware_features = powerpc_firmware_features;
do_feature_fixups(powerpc_firmware_features,
  &__start___fw_ftr_fixup, &__stop___fw_ftr_fixup);
 #endif
do_final_fixups();
 }
 
+static int __init check_features(void)
+{
+   WARN(saved_cpu_features != cur_cpu_spec->cpu_features,
+"CPU features changed after feature patching!\n");
+   WARN(saved_mmu_features != cur_cpu_spec->mmu_features,
+"MMU features changed after feature patching!\n");
+#ifdef CONFIG_PPC64
+   WARN(saved_firmware_features != powerpc_firmware_features,
+"Firmware features changed after feature patching!\n");
+#endif
+
+   return 0;
+}
+late_initcall(check_features);
+
 #ifdef CONFIG_FTR_FIXUP_SELFTEST
 
 #define check(x)   \
-- 
2.7.4


[PATCH 5/6] powerpc/64: Do feature patching before MMU init

2016-07-26 Thread Michael Ellerman
Up until now we needed to do the MMU init before feature patching,
because part of the MMU init was scanning the device tree and setting
and/or clearing some MMU feature bits.

Now that we have split that MMU feature modification out into routines
called from early_init_devtree() (called earlier), we can do feature
patching before calling MMU init.

The advantage of this is that the remainder of the MMU init runs with
the final set of features, which will apply for the rest of the life
of the system. This means we don't have to special-case anything
called from MMU init to deal with a changing set of feature bits.

Signed-off-by: Michael Ellerman 
---
 arch/powerpc/kernel/setup_64.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index d8216aed22b7..984696136f96 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -298,12 +298,12 @@ void __init early_setup(unsigned long dt_ptr)
 */
configure_exceptions();
 
-   /* Initialize the hash table or TLB handling */
-   early_init_mmu();
-
/* Apply all the dynamic patching */
apply_feature_fixups();
 
+   /* Initialize the hash table or TLB handling */
+   early_init_mmu();
+
/*
 * At this point, we can let interrupts switch to virtual mode
 * (the MMU has been setup), so adjust the MSR in the PACA to
-- 
2.7.4


[PATCH 4/6] powerpc/mm: Do radix device tree scanning earlier

2016-07-26 Thread Michael Ellerman
Like we just did for hash, split the device tree scanning parts out and
call them from mmu_early_init_devtree().

Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/book3s/64/mmu.h | 1 +
 arch/powerpc/mm/init_64.c| 4 +++-
 arch/powerpc/mm/pgtable-radix.c  | 3 +--
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h 
b/arch/powerpc/include/asm/book3s/64/mmu.h
index 358f1410dc0d..9ee00c2576d0 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -109,6 +109,7 @@ extern int mmu_io_psize;
 /* MMU initialization */
 void mmu_early_init_devtree(void);
 void hash__early_init_devtree(void);
+void radix__early_init_devtree(void);
 extern void radix_init_native(void);
 extern void hash__early_init_mmu(void);
 extern void radix__early_init_mmu(void);
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index d02c6c9a..e0ab33d20a10 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -427,7 +427,9 @@ void __init mmu_early_init_devtree(void)
if (disable_radix)
cur_cpu_spec->mmu_features &= ~MMU_FTR_RADIX;
 
-   if (!radix_enabled())
+   if (radix_enabled())
+   radix__early_init_devtree();
+   else
hash__early_init_devtree();
 }
 #endif /* CONFIG_PPC_STD_MMU_64 */
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 003ff48a11b6..f34ccdbe0fbd 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -264,7 +264,7 @@ static int __init radix_dt_scan_page_sizes(unsigned long 
node,
return 1;
 }
 
-static void __init radix_init_page_sizes(void)
+void __init radix__early_init_devtree(void)
 {
int rc;
 
@@ -343,7 +343,6 @@ void __init radix__early_init_mmu(void)
__pte_frag_nr = H_PTE_FRAG_NR;
__pte_frag_size_shift = H_PTE_FRAG_SIZE_SHIFT;
 
-   radix_init_page_sizes();
if (!firmware_has_feature(FW_FEATURE_LPAR)) {
radix_init_native();
lpcr = mfspr(SPRN_LPCR);
-- 
2.7.4


[PATCH 3/6] powerpc/mm: Do hash device tree scanning earlier

2016-07-26 Thread Michael Ellerman
Currently MMU initialisation (early_init_mmu()) consists of a mixture of
scanning the device tree, setting MMU feature bits, and then also doing
actual initialisation of MMU data structures.

We'd like to decouple the setting of the MMU features from the actual
setup. So split out the device tree scanning, and associated code, and
call it from mmu_early_init_devtree().

Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/book3s/64/mmu.h |  1 +
 arch/powerpc/mm/hash_utils_64.c  | 15 +--
 arch/powerpc/mm/init_64.c|  3 +++
 3 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h 
b/arch/powerpc/include/asm/book3s/64/mmu.h
index 4eb4bd019716..358f1410dc0d 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -108,6 +108,7 @@ extern int mmu_io_psize;
 
 /* MMU initialization */
 void mmu_early_init_devtree(void);
+void hash__early_init_devtree(void);
 extern void radix_init_native(void);
 extern void hash__early_init_mmu(void);
 extern void radix__early_init_mmu(void);
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 1ff11c1bb182..5f922e93af25 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -759,12 +759,6 @@ static void __init htab_initialize(void)
 
DBG(" -> htab_initialize()\n");
 
-   /* Initialize segment sizes */
-   htab_init_seg_sizes();
-
-   /* Initialize page sizes */
-   htab_init_page_sizes();
-
if (mmu_has_feature(MMU_FTR_1T_SEGMENT)) {
mmu_kernel_ssize = MMU_SEGSIZE_1T;
mmu_highuser_ssize = MMU_SEGSIZE_1T;
@@ -885,6 +879,15 @@ static void __init htab_initialize(void)
 #undef KB
 #undef MB
 
+void __init hash__early_init_devtree(void)
+{
+   /* Initialize segment sizes */
+   htab_init_seg_sizes();
+
+   /* Initialize page sizes */
+   htab_init_page_sizes();
+}
+
 void __init hash__early_init_mmu(void)
 {
/*
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 0d51e6e25db5..d02c6c9a 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -426,5 +426,8 @@ void __init mmu_early_init_devtree(void)
/* Disable radix mode based on kernel command line. */
if (disable_radix)
cur_cpu_spec->mmu_features &= ~MMU_FTR_RADIX;
+
+   if (!radix_enabled())
+   hash__early_init_devtree();
 }
 #endif /* CONFIG_PPC_STD_MMU_64 */
-- 
2.7.4


[PATCH 2/6] powerpc/mm: Move disable_radix handling into mmu_early_init_devtree()

2016-07-26 Thread Michael Ellerman
Move the handling of the disable_radix command line argument into the
newly created mmu_early_init_devtree().

It's an MMU option so it's preferable to have it in an mm related file,
and it also means platforms that don't support radix don't have to carry
the code.

Signed-off-by: Michael Ellerman 
---
 arch/powerpc/kernel/prom.c | 13 -
 arch/powerpc/mm/init_64.c  | 11 +++
 2 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 9686984e79c4..b4b6952e8991 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -647,14 +647,6 @@ static void __init early_reserve_mem(void)
 #endif
 }
 
-static bool disable_radix;
-static int __init parse_disable_radix(char *p)
-{
-   disable_radix = true;
-   return 0;
-}
-early_param("disable_radix", parse_disable_radix);
-
 void __init early_init_devtree(void *params)
 {
phys_addr_t limit;
@@ -744,11 +736,6 @@ void __init early_init_devtree(void *params)
 */
spinning_secondaries = boot_cpu_count - 1;
 #endif
-   /*
-* now fixup radix MMU mode based on kernel command line
-*/
-   if (disable_radix)
-   cur_cpu_spec->mmu_features &= ~MMU_FTR_RADIX;
 
mmu_early_init_devtree();
 
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index d0fb33ac3db2..0d51e6e25db5 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -413,7 +413,18 @@ EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
 #endif /* CONFIG_SPARSEMEM_VMEMMAP/CONFIG_FLATMEM */
 
 #ifdef CONFIG_PPC_STD_MMU_64
+static bool disable_radix;
+static int __init parse_disable_radix(char *p)
+{
+   disable_radix = true;
+   return 0;
+}
+early_param("disable_radix", parse_disable_radix);
+
 void __init mmu_early_init_devtree(void)
 {
+   /* Disable radix mode based on kernel command line. */
+   if (disable_radix)
+   cur_cpu_spec->mmu_features &= ~MMU_FTR_RADIX;
 }
 #endif /* CONFIG_PPC_STD_MMU_64 */
-- 
2.7.4


[PATCH 1/6] powerpc/mm: Add mmu_early_init_devtree()

2016-07-26 Thread Michael Ellerman
Empty for now, but we'll add to it in the next patch.

Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/book3s/64/mmu.h | 1 +
 arch/powerpc/include/asm/mmu.h   | 1 +
 arch/powerpc/kernel/prom.c   | 2 ++
 arch/powerpc/mm/init_64.c| 6 ++
 4 files changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h 
b/arch/powerpc/include/asm/book3s/64/mmu.h
index d4eda6420523..4eb4bd019716 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -107,6 +107,7 @@ extern int mmu_vmemmap_psize;
 extern int mmu_io_psize;
 
 /* MMU initialization */
+void mmu_early_init_devtree(void);
 extern void radix_init_native(void);
 extern void hash__early_init_mmu(void);
 extern void radix__early_init_mmu(void);
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 54471228f7b8..14220c5c12c9 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -210,6 +210,7 @@ extern void early_init_mmu(void);
 extern void early_init_mmu_secondary(void);
 extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
   phys_addr_t first_memblock_size);
+static inline void mmu_early_init_devtree(void) { }
 #endif /* __ASSEMBLY__ */
 #endif
 
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index bae3db791150..9686984e79c4 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -750,6 +750,8 @@ void __init early_init_devtree(void *params)
if (disable_radix)
cur_cpu_spec->mmu_features &= ~MMU_FTR_RADIX;
 
+   mmu_early_init_devtree();
+
 #ifdef CONFIG_PPC_POWERNV
/* Scan and build the list of machine check recoverable ranges */
of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL);
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 33709bdb0419..d0fb33ac3db2 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -411,3 +411,9 @@ struct page *realmode_pfn_to_page(unsigned long pfn)
 EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
 
 #endif /* CONFIG_SPARSEMEM_VMEMMAP/CONFIG_FLATMEM */
+
+#ifdef CONFIG_PPC_STD_MMU_64
+void __init mmu_early_init_devtree(void)
+{
+}
+#endif /* CONFIG_PPC_STD_MMU_64 */
-- 
2.7.4


Re: [RFC PATCH 0/9] powerpc: "paca->soft_enabled" based local atomic operation implementation

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 05:51 PM, Benjamin Herrenschmidt wrote:
> On Mon, 2016-07-25 at 20:22 +0530, Madhavan Srinivasan wrote:
>> But this patchset uses Benjamin Herrenschmidt's suggestion of using
>> arch_local_irq_disable_var() to soft-disable interrupts (including PMIs).
>> After finishing the "op", arch_local_irq_restore() is called and
>> correspondingly interrupts are replayed if any occurred.
>
> I am not a fan of "var", we probably want "level".

Sure. Will do.

> Also be careful, you might be already soft-disabled at level 1, you
> must restore to level 1, not level 0 in that case. Might want to
> actually return the level in "flags" and restore that.

Yes. That's correct. I return the current flag value.

Maddy

> Cheers,
> Ben.




Re: [RFC PATCH 0/9] powerpc: "paca->soft_enabled" based local atomic operation implementation

2016-07-26 Thread Benjamin Herrenschmidt
On Mon, 2016-07-25 at 20:22 +0530, Madhavan Srinivasan wrote:
> But this patchset uses Benjamin Herrenschmidt's suggestion of using
> arch_local_irq_disable_var() to soft-disable interrupts (including PMIs).
> After finishing the "op", arch_local_irq_restore() is called and
> correspondingly interrupts are replayed if any occurred.

I am not a fan of "var", we probably want "level".

Also be careful, you might be already soft-disabled at level 1, you
must restore to level 1, not level 0 in that case. Might want to
actually return the level in "flags" and restore that.

Cheers,
Ben.


Re: [v3] UCC_GETH/UCC_FAST: Use IS_ERR_VALUE_U32 API to avoid IS_ERR_VALUE abuses.

2016-07-26 Thread Arnd Bergmann
On Saturday, July 23, 2016 11:35:51 PM CEST Arvind Yadav wrote:
> diff --git a/include/linux/err.h b/include/linux/err.h
> index 1e35588..a42f942 100644
> --- a/include/linux/err.h
> +++ b/include/linux/err.h
> @@ -19,6 +19,7 @@
>  #ifndef __ASSEMBLY__
>  
>  #define IS_ERR_VALUE(x) unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)
> +#define IS_ERR_VALUE_U32(x) unlikely((unsigned int)(x) >= (unsigned int)-MAX_ERRNO)
>  
>  static inline void * __must_check ERR_PTR(long error)
>  {

This doesn't really look like something we want to have as a generic
interface. The IS_ERR_VALUE() API is rather awkward already, and your
use seems specific to the cpm_muram_alloc() function.

How about something like

int cpm_muram_error(unsigned long addr)
{
	if (addr >= (unsigned long)-MAX_ERRNO)
		return addr;
	else
		return 0;
}

and then use that to check the value returned by the allocation
that is still an 'unsigned long', before assigning it to a 'u32'.

Arnd

Re: [PATCH V2 1/2] tty/hvc: Use IRQF_SHARED for OPAL hvc consoles

2016-07-26 Thread Michael Ellerman
Greg KH  writes:

> On Tue, Jul 26, 2016 at 02:11:11PM +1000, Michael Ellerman wrote:
>> Quoting Michael Ellerman (2016-07-11 16:29:20)
>> > Greg are you happy to take these two?
>> 
>> I don't see this series anywhere, do you mind if I take them via the
>> powerpc tree for 4.8 ? Or do you want to pick them up.
>
> You can take them, I'm not touching patches now until 4.8-rc1 is out,
> sorry.

No worries, I'll grab them.

cheers

Re: [PATCH] powernv/pci: Add PHB register dump debugfs handle

2016-07-26 Thread Michael Ellerman
Russell Currey  writes:

> On Tue, 2016-07-26 at 11:45 +1000, Michael Ellerman wrote:
>> Quoting Russell Currey (2016-07-22 15:23:36)
>> 
>> DEFINE_SIMPLE_ATTRIBUTE(fops_foo, NULL, foo_set, "%llu\n");
>> 
>> That requires that you write "1" to the file to trigger the reg dump.
>
> I don't think I can use this here.  The diag dump is triggered on the given
> PHB (these are in /sys/kernel/debug/powerpc/PCI), and that PHB is retrieved
> from the file handler.  It looks like I have no access to the file struct if
> using a simple getter/setter.

You don't have access to the file struct, but you don't need it; you can
register the fops with a data pointer.

So the DEFINE_SIMPLE_ATTRIBUTE() gives you a fops_foo, which you can
then do:

  debugfs_create_file("dump-regs", 0200, phb->dbgfs, hose, &fops_foo);

And then in foo_set() data == hose.

cheers

RE: [v3] UCC_GETH/UCC_FAST: Use IS_ERR_VALUE_U32 API to avoid IS_ERR_VALUE abuses.

2016-07-26 Thread David Laight
From: Arvind Yadav
> Sent: 23 July 2016 19:06
> IS_ERR_VALUE() assumes that its parameter is an unsigned long.
> It can not be used to check if an 'unsigned int' reflects an error.
> As they pass an 'unsigned int' into a function that takes an
> 'unsigned long' argument. This happens to work because the type
> is sign-extended on 64-bit architectures before it gets converted
> into an unsigned type.
> 
> However, anything that passes an 'unsigned short' or 'unsigned int'
> argument into IS_ERR_VALUE() is guaranteed to be broken, as are
> 8-bit integers and types that are wider than 'unsigned long'.
> 
> It would be nice to any users that are not passing 'unsigned int'
> arguments.

Isn't that a load of bollocks???
It is certainly very over-wordy.

IS_ERR_VALUE(x) is ((x) >= (unsigned long)-4096)

Assuming sizeof (short) == 2 && sizeof (int) == 4 && sizeof (long) == 8.

'signed char' and 'signed short' are first sign extended to 'signed int'.
'unsigned char' and 'unsigned short' are first zero extended to 'signed int'.
'signed int' is sign extended to 'signed long'.
'signed long' is converted to 'unsigned long' treating the bit-pattern as 
'unsigned long'.
'unsigned int' is zero extended to 'unsigned long'.

It is probably enough to say that on 64bit systems IS_ERR_VALUE() of an
unsigned int is always false because the 32bit value is zero extended to
64 bits.

A possible 'fix' would be to define IS_ERR_VALUE() as:

#define IS_ERR_VALUE(x) unlikely(sizeof (x) > sizeof (int) ? \
	(x) > (unsigned long)-MAX_ERRNO : (x) > (unsigned int)-MAX_ERRNO)

However correct analysis of every case might show up real errors.
So a compilation warning/error might be more appropriate.

David


[PATCH] powerpc: set used_vsr/used_vr/used_spe in sigreturn path when MSR bits are active

2016-07-26 Thread wei . guo . simon
From: Simon Guo 

Normally, when the MSR[VSX/VR/SPE] bits are 1, the used_vsr/used_vr/used_spe
bits have already been set. However, the signal frame lives in user space
and is controlled by the user application. It is up to the kernel to make
sure used_vsr/used_vr/used_spe (in the kernel) are 1 and consistent with
the MSR bits.

For example, the CRIU application, which uses sigreturn to restore a
checkpointed process, can lead to the case where the MSR[VSX] bit is
active in the signal frame but the used_vsr bit is not set. (The same
applies to VR/SPE.)

This patch enforces this in the kernel by always setting the used_* bit
when the related MSR bits are active in the signal frame and we are
doing sigreturn.

This patch is based on Ben's Proposal.

Cc: Paul Mackerras 
Cc: Michael Ellerman 
Cc: Anton Blanchard 
Cc: Cyril Bur 
Cc: Michael Neuling 
Cc: Andrew Morton 
Cc: "Amanieu d'Antras" 
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt 
Signed-off-by: Simon Guo 
---
 arch/powerpc/kernel/signal_32.c |  6 ++
 arch/powerpc/kernel/signal_64.c | 11 ---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index b6aa378..1bf074e 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -698,6 +698,7 @@ static long restore_user_regs(struct pt_regs *regs,
		if (__copy_from_user(&current->thread.vr_state, &sr->mc_vregs,
				     sizeof(sr->mc_vregs)))
return 1;
+   current->thread.used_vr = true;
} else if (current->thread.used_vr)
			memset(&current->thread.vr_state, 0,
			       ELF_NVRREG * sizeof(vector128));
@@ -724,6 +725,7 @@ static long restore_user_regs(struct pt_regs *regs,
 */
		if (copy_vsx_from_user(current, &sr->mc_vsregs))
return 1;
+   current->thread.used_vsr = true;
} else if (current->thread.used_vsr)
for (i = 0; i < 32 ; i++)
current->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
@@ -743,6 +745,7 @@ static long restore_user_regs(struct pt_regs *regs,
		if (__copy_from_user(current->thread.evr, &sr->mc_vregs,
 ELF_NEVRREG * sizeof(u32)))
return 1;
+   current->thread.used_spe = true;
} else if (current->thread.used_spe)
memset(current->thread.evr, 0, ELF_NEVRREG * sizeof(u32));
 
@@ -799,6 +802,7 @@ static long restore_tm_user_regs(struct pt_regs *regs,
				     &tm_sr->mc_vregs,
 sizeof(sr->mc_vregs)))
return 1;
+   current->thread.used_vr = true;
} else if (current->thread.used_vr) {
		memset(&current->thread.vr_state, 0,
		       ELF_NVRREG * sizeof(vector128));
@@ -832,6 +836,7 @@ static long restore_tm_user_regs(struct pt_regs *regs,
		if (copy_vsx_from_user(current, &sr->mc_vsregs) ||
		    copy_transact_vsx_from_user(current, &tm_sr->mc_vsregs))
return 1;
+   current->thread.used_vsr = true;
} else if (current->thread.used_vsr)
for (i = 0; i < 32 ; i++) {
current->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = 0;
@@ -848,6 +853,7 @@ static long restore_tm_user_regs(struct pt_regs *regs,
		if (__copy_from_user(current->thread.evr, &sr->mc_vregs,
 ELF_NEVRREG * sizeof(u32)))
return 1;
+   current->thread.used_spe = true;
} else if (current->thread.used_spe)
memset(current->thread.evr, 0, ELF_NEVRREG * sizeof(u32));
 
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index 2552079..8704269 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -363,9 +363,11 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
if (v_regs && !access_ok(VERIFY_READ, v_regs, 34 * sizeof(vector128)))
return -EFAULT;
/* Copy 33 vec registers (vr0..31 and vscr) from the stack */
-   if (v_regs != NULL && (msr & MSR_VEC) != 0)
+   if (v_regs != NULL && (msr & MSR_VEC) != 0) {
		err |= __copy_from_user(&current->thread.vr_state, v_regs,
33 * sizeof(vector128));
+   current->thread.used_vr = true;
+   }
else if (current->thread.used_vr)
		memset(&current->thread.vr_state, 0, 33 * sizeof(vector128));
/* Always get VRSAVE back */
@@ -385,9 +387,10 @@ static 

RE: [Patch v3 1/3] irqchip/qeic: move qeic driver from drivers/soc/fsl/qe

2016-07-26 Thread Qiang Zhao
Hi Jason,

On Mon, Jul 26, 2016 at 03:24AM, Jason Cooper wrote:
> -Original Message-
> From: Jason Cooper [mailto:ja...@lakedaemon.net]
> Sent: Tuesday, July 26, 2016 3:24 AM
> To: Qiang Zhao 
> Cc: o...@buserror.net; linuxppc-dev@lists.ozlabs.org; linux-
> ker...@vger.kernel.org; Xiaobo Xie 
> Subject: Re: [Patch v3 1/3] irqchip/qeic: move qeic driver from
> drivers/soc/fsl/qe
> 
> >
> >  static DEFINE_RAW_SPINLOCK(qe_ic_lock);
> >
> > diff --git a/drivers/soc/fsl/qe/Makefile b/drivers/soc/fsl/qe/Makefile
> > index 2031d38..51e4726 100644
> > --- a/drivers/soc/fsl/qe/Makefile
> > +++ b/drivers/soc/fsl/qe/Makefile
> > @@ -1,7 +1,7 @@
> >  #
> >  # Makefile for the linux ppc-specific parts of QE  #
> > -obj-$(CONFIG_QUICC_ENGINE)+= qe.o qe_common.o qe_ic.o qe_io.o
> > +obj-$(CONFIG_QUICC_ENGINE)+= qe.o qe_common.o qe_io.o
> >  obj-$(CONFIG_CPM)  += qe_common.o
> >  obj-$(CONFIG_UCC)  += ucc.o
> >  obj-$(CONFIG_UCC_SLOW) += ucc_slow.o
> > diff --git a/drivers/soc/fsl/qe/qe_ic.h b/drivers/soc/fsl/qe/qe_ic.h
> > deleted file mode 100644 index 926a2ed..000
> > --- a/drivers/soc/fsl/qe/qe_ic.h
> > +++ /dev/null
> > @@ -1,103 +0,0 @@
> > -/*
> > - * drivers/soc/fsl/qe/qe_ic.h
> > - *
> > - * QUICC ENGINE Interrupt Controller Header
> > - *
> > - * Copyright (C) 2006 Freescale Semiconductor, Inc. All rights reserved.
> > - *
> > - * Author: Li Yang 
> > - * Based on code from Shlomi Gridish 
> > - *
> > - * This program is free software; you can redistribute  it and/or
> > modify it
> > - * under  the terms of  the GNU General  Public License as published
> > by the
> > - * Free Software Foundation;  either version 2 of the  License, or
> > (at your
> > - * option) any later version.
> > - */
> 
> Please transfer this over as well, and update is as necessary.

Thank you for your review!
The author and the copyright are the same as in the .c file; how should I
transfer this? And could you tell me what I should do to update it?

Thank you!
-Zhao Qiang

Re: [PATCH 3/3] powerpc: Convert fsl_rstcr_restart to a reset handler

2016-07-26 Thread Scott Wood
On Mon, 2016-07-25 at 21:25 -0700, Andrey Smirnov wrote:
> Convert fsl_rstcr_restart into a function to be registered with
> register_reset_handler() API and introduce fsl_rstcr_restart_register()
> function that can be added as an initcall that would do the
> aforementioned registration.
> 
> Signed-off-by: Andrey Smirnov 

Is there a particular motivation for this (e.g. new handlers you plan to
register elsewhere)?

> diff --git a/arch/powerpc/platforms/85xx/bsc913x_qds.c
> b/arch/powerpc/platforms/85xx/bsc913x_qds.c
> index 07dd6ae..14ea7a0 100644
> --- a/arch/powerpc/platforms/85xx/bsc913x_qds.c
> +++ b/arch/powerpc/platforms/85xx/bsc913x_qds.c
> @@ -53,6 +53,7 @@ static void __init bsc913x_qds_setup_arch(void)
>  }
>  
>  machine_arch_initcall(bsc9132_qds, mpc85xx_common_publish_devices);
> +machine_arch_initcall(bsc9133_qds, fsl_rstcr_restart_register);

Do we really still need to call the registration on a per-board basis, now
that boards have a way of registering a higher-priority notifier?  Can't we
just have setup_rstcr() do the registration when it finds the appropriate
device tree node?

> +int fsl_rstcr_restart_register(void)
> +{
> + static struct notifier_block restart_handler;
> +
> + restart_handler.notifier_call = fsl_rstcr_restart;
> + restart_handler.priority = 128;
> +
> +	return register_restart_handler(&restart_handler);
> +}
> +EXPORT_SYMBOL(fsl_rstcr_restart_register);

When would this ever get called from a module?

-Scott


Re: [RFC PATCH 7/9] powerpc: Add support to mask perf interrupts

2016-07-26 Thread Nicholas Piggin
On Tue, 26 Jul 2016 12:52:02 +0530
Madhavan Srinivasan  wrote:

> On Tuesday 26 July 2016 12:40 PM, Nicholas Piggin wrote:
> > On Tue, 26 Jul 2016 12:16:32 +0530
> > Madhavan Srinivasan  wrote:
> >  
> >> On Tuesday 26 July 2016 12:00 PM, Nicholas Piggin wrote:  
> >>> On Tue, 26 Jul 2016 11:55:51 +0530
> >>> Madhavan Srinivasan  wrote:
> >>> 
>  On Tuesday 26 July 2016 11:16 AM, Nicholas Piggin wrote:  
> > On Mon, 25 Jul 2016 20:22:20 +0530
> > Madhavan Srinivasan  wrote:
> >
> >> To support masking of the PMI interrupts, couple of new
> >> interrupt handler macros are added
> >> MASKABLE_EXCEPTION_PSERIES_OOL and
> >> MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to
> >> include the SOFTEN_TEST and implement the support at both host
> >> and guest kernel.
> >>
> >> Couple of new irq #defs "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*"
> >> added to use in the exception code to check for PMI interrupts.
> >>
> >> __SOFTEN_TEST macro is modified to support the PMI interrupt.
> >> Present __SOFTEN_TEST code loads the soft_enabled from paca and
> >> check to call masked_interrupt handler code. To support both
> >> current behaviour and PMI masking, these changes are added,
> >>
> >> 1) Current LR register content are saved in R11
> >> 2) "bge" branch operation is changed to "bgel".
> >> 3) restore R11 to LR
> >>
> >> Reason:
> >>
> >> To retain PMI as NMI behaviour for flag state of 1, we save the
> >> LR register value in R11 and branch to "masked_interrupt"
> >> handler with LR update. And in "masked_interrupt" handler, we
> >> check for the "SOFTEN_VALUE_*" value in R10 for PMI and branch
> >> back with "blr" if PMI.
> >>
> >> To mask PMI for a flag >1 value, masked_interrupt avoids the
> >> above check and continues to execute the masked_interrupt code
> >> and disables MSR[EE] and updates irq_happened with PMI info.
> >>
> >> Finally, saving of R11 is moved before calling SOFTEN_TEST in
> >> the __EXCEPTION_PROLOG_1 macro to support saving of LR values
> >> in SOFTEN_TEST.
> >>
> >> Signed-off-by: Madhavan Srinivasan 
> >> ---
> >>  arch/powerpc/include/asm/exception-64s.h | 22 --
> >>  arch/powerpc/include/asm/hw_irq.h        |  1 +
> >>  arch/powerpc/kernel/exceptions-64s.S     | 27 ---
> >>  3 files changed, 45 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> >> index 44d3f539d8a5..c951b7ab5108 100644
> >> --- a/arch/powerpc/include/asm/exception-64s.h
> >> +++ b/arch/powerpc/include/asm/exception-64s.h
> >> @@ -166,8 +166,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
> >>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);	\
> >>  	SAVE_CTR(r10, area);					\
> >>  	mfcr	r9;						\
> >> -	extra(vec);						\
> >>  	std	r11,area+EX_R11(r13);				\
> >> +	extra(vec);						\
> >>  	std	r12,area+EX_R12(r13);				\
> >>  	GET_SCRATCH0(r10);					\
> >>  	std	r10,area+EX_R13(r13)
> >> @@ -403,12 +403,17 @@ label##_relon_hv:				\
> >>  #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
> >>  #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
> >>  #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
> >> +#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
> >> +#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
> >>
> >>  #define __SOFTEN_TEST(h, vec)				\
> >>  	lbz	r10,PACASOFTIRQEN(r13);				\
> >>  	cmpwi	r10,LAZY_INTERRUPT_DISABLED;			\
> >>  	li	r10,SOFTEN_VALUE_##vec;				\
> >> -	bge	masked_##h##interrupt
> > At which point, can't we pass in the interrupt level we want to
> > mask for to SOFTEN_TEST, and avoid all this extra code
> > changes?  
>  IIUC, we do pass the interrupt info to SOFTEN_TEST. Incase of
>  PMU interrupt we will have the value as PACA_IRQ_PMI.
> 
>  
> > PMU masked interrupt will compare with SOFTEN_LEVEL_PMU,
> > existing interrupts will compare with SOFTEN_LEVEL_EE (or
> > whatever suitable names there are).
> >
> >
> >> +	mflr	r11;						\
> >> +	bgel	masked_##h##interrupt;				\
> >> +	mtlr	r11;
> > This might corrupt return prediction when masked_interrupt does
> > not  
>  Hmm this is a valid point.
>  
> > return. I guess 

Re: [RFC PATCH 7/9] powerpc: Add support to mask perf interrupts

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 12:40 PM, Nicholas Piggin wrote:

On Tue, 26 Jul 2016 12:16:32 +0530
Madhavan Srinivasan  wrote:


On Tuesday 26 July 2016 12:00 PM, Nicholas Piggin wrote:

On Tue, 26 Jul 2016 11:55:51 +0530
Madhavan Srinivasan  wrote:
  

On Tuesday 26 July 2016 11:16 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:20 +0530
Madhavan Srinivasan  wrote:
 

To support masking of the PMI interrupts, couple of new interrupt
handler macros are added MASKABLE_EXCEPTION_PSERIES_OOL and
MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to include
the SOFTEN_TEST and implement the support at both host and guest
kernel.

Couple of new irq #defs "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*"
added to use in the exception code to check for PMI interrupts.

__SOFTEN_TEST macro is modified to support the PMI interrupt.
Present __SOFTEN_TEST code loads the soft_enabled from paca and
check to call masked_interrupt handler code. To support both
current behaviour and PMI masking, these changes are added,

1) Current LR register content are saved in R11
2) "bge" branch operation is changed to "bgel".
3) restore R11 to LR

Reason:

To retain PMI as NMI behaviour for flag state of 1, we save the
LR register value in R11 and branch to the "masked_interrupt"
handler with LR update. And in the "masked_interrupt" handler, we
check for the "SOFTEN_VALUE_*" value in R10 for PMI and branch
back with "blr" if PMI.

To mask PMI for a flag >1 value, masked_interrupt avoids the
above check and continues to execute the masked_interrupt code and
disables MSR[EE] and updates irq_happened with PMI info.

Finally, saving of R11 is moved before calling SOFTEN_TEST in the
__EXCEPTION_PROLOG_1 macro to support saving of LR values in
SOFTEN_TEST.

Signed-off-by: Madhavan Srinivasan 
---
 arch/powerpc/include/asm/exception-64s.h | 22 --
 arch/powerpc/include/asm/hw_irq.h        |  1 +
 arch/powerpc/kernel/exceptions-64s.S     | 27 ---
 3 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 44d3f539d8a5..c951b7ab5108 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -166,8 +166,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
 	SAVE_CTR(r10, area);						\
 	mfcr	r9;							\
-	extra(vec);							\
 	std	r11,area+EX_R11(r13);					\
+	extra(vec);							\
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
@@ -403,12 +403,17 @@ label##_relon_hv:					\
 #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
 #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
+#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
+#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
 
 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
 	cmpwi	r10,LAZY_INTERRUPT_DISABLED;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
-	bge	masked_##h##interrupt

At which point, can't we pass in the interrupt level we want to
mask for to SOFTEN_TEST, and avoid all this extra code changes?

IIUC, we do pass the interrupt info to SOFTEN_TEST. In case of a
PMU interrupt we will have the value as PACA_IRQ_PMI.

  

PMU masked interrupt will compare with SOFTEN_LEVEL_PMU, existing
interrupts will compare with SOFTEN_LEVEL_EE (or whatever suitable
names there are).

 

+	mflr	r11;							\
+	bgel	masked_##h##interrupt;					\
+	mtlr	r11;

This might corrupt return prediction when masked_interrupt does
not

Hmm this is a valid point.
  

return. I guess that's uncommon case though.

No, it is. The kernel mostly uses irq_disable with (1) today, and only
in specific cases do we disable all the interrupts. So we are going to
return almost always when irqs are soft disabled.

Since we need to support the PMIs as NMI when irq disable level is
1, we need to skip masked_interrupt.

As you mentioned, if we have a separate macro (SOFTEN_TEST_PMU),
these can be avoided, but then it is code replication and we may
need to change some more macros. But this is interesting; let me work
on this.

I would really prefer to do that, even if it means a little more
code.

Another option is to give an additional parameter to the MASKABLE
variants of the exception handlers, which you can pass in the
"mask level" into. I think it's not a bad idea to make it explicit
even for the existing ones so it's clear which level they are masked
at.

The issue here is that the masked_interrupt function is not part of the
interrupt vector code 

Re: [RFC PATCH 7/9] powerpc: Add support to mask perf interrupts

2016-07-26 Thread Nicholas Piggin
On Tue, 26 Jul 2016 12:16:32 +0530
Madhavan Srinivasan  wrote:

> On Tuesday 26 July 2016 12:00 PM, Nicholas Piggin wrote:
> > On Tue, 26 Jul 2016 11:55:51 +0530
> > Madhavan Srinivasan  wrote:
> >  
> >> On Tuesday 26 July 2016 11:16 AM, Nicholas Piggin wrote:  
> >>> On Mon, 25 Jul 2016 20:22:20 +0530
> >>> Madhavan Srinivasan  wrote:
> >>> 
>  To support masking of the PMI interrupts, couple of new interrupt
>  handler macros are added MASKABLE_EXCEPTION_PSERIES_OOL and
>  MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to include
>  the SOFTEN_TEST and implement the support at both host and guest
>  kernel.
> 
>  Couple of new irq #defs "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*"
>  added to use in the exception code to check for PMI interrupts.
> 
>  __SOFTEN_TEST macro is modified to support the PMI interrupt.
>  Present __SOFTEN_TEST code loads the soft_enabled from paca and
>  check to call masked_interrupt handler code. To support both
>  current behaviour and PMI masking, these changes are added,
> 
>  1) Current LR register content are saved in R11
>  2) "bge" branch operation is changed to "bgel".
>  3) restore R11 to LR
> 
>  Reason:
> 
>  To retain PMI as NMI behaviour for flag state of 1, we save the
>  LR register value in R11 and branch to "masked_interrupt"
>  handler with LR update. And in "masked_interrupt" handler, we
>  check for the "SOFTEN_VALUE_*" value in R10 for PMI and branch
>  back with "blr" if PMI.
> 
>  To mask PMI for a flag >1 value, masked_interrupt avoids the
>  above check and continues to execute the masked_interrupt code and
>  disables MSR[EE] and updates irq_happened with PMI info.
> 
>  Finally, saving of R11 is moved before calling SOFTEN_TEST in the
>  __EXCEPTION_PROLOG_1 macro to support saving of LR values in
>  SOFTEN_TEST.
> 
>  Signed-off-by: Madhavan Srinivasan 
>  ---
>  arch/powerpc/include/asm/exception-64s.h | 22 --
>  arch/powerpc/include/asm/hw_irq.h        |  1 +
>  arch/powerpc/kernel/exceptions-64s.S     | 27 ---
>  3 files changed, 45 insertions(+), 5 deletions(-)
> 
>  diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>  index 44d3f539d8a5..c951b7ab5108 100644
>  --- a/arch/powerpc/include/asm/exception-64s.h
>  +++ b/arch/powerpc/include/asm/exception-64s.h
>  @@ -166,8 +166,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
>  	SAVE_CTR(r10, area);						\
>  	mfcr	r9;							\
>  -	extra(vec);							\
>  	std	r11,area+EX_R11(r13);					\
>  +	extra(vec);							\
>  	std	r12,area+EX_R12(r13);					\
>  	GET_SCRATCH0(r10);						\
>  	std	r10,area+EX_R13(r13)
>  @@ -403,12 +403,17 @@ label##_relon_hv:				\
>  #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
>  #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
>  #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
>  +#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
>  +#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
> 
>  #define __SOFTEN_TEST(h, vec)					\
>  	lbz	r10,PACASOFTIRQEN(r13);				\
>  	cmpwi	r10,LAZY_INTERRUPT_DISABLED;			\
>  	li	r10,SOFTEN_VALUE_##vec;				\
>  -	bge	masked_##h##interrupt
> >>> At which point, can't we pass in the interrupt level we want to
> >>> mask for to SOFTEN_TEST, and avoid all this extra code changes?  
> >> IIUC, we do pass the interrupt info to SOFTEN_TEST. Incase of
> >> PMU interrupt we will have the value as PACA_IRQ_PMI.
> >>
> >>  
> >>>
> >>> PMU masked interrupt will compare with SOFTEN_LEVEL_PMU, existing
> >>> interrupts will compare with SOFTEN_LEVEL_EE (or whatever suitable
> >>> names there are).
> >>>
> >>> 
>  +	mflr	r11;						\
>  +	bgel	masked_##h##interrupt;				\
>  +	mtlr	r11;
> >>> This might corrupt return prediction when masked_interrupt does
> >>> not  
> >> Hmm this is a valid point.
> >>  
> >>> return. I guess that's uncommon case though.  
> >> No, it is. The kernel mostly uses irq_disable with (1) today, and
> >> only in specific cases do we disable all the interrupts. So we are
> >> going to return almost always when irqs are soft disabled.
> >>
> >> Since we need to support the PMIs as NMI when irq disable level is
> >> 1, 

Re: [RFC PATCH 7/9] powerpc: Add support to mask perf interrupts

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 12:00 PM, Nicholas Piggin wrote:

On Tue, 26 Jul 2016 11:55:51 +0530
Madhavan Srinivasan  wrote:


On Tuesday 26 July 2016 11:16 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:20 +0530
Madhavan Srinivasan  wrote:
  

To support masking of the PMI interrupts, couple of new interrupt
handler macros are added MASKABLE_EXCEPTION_PSERIES_OOL and
MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to include
the SOFTEN_TEST and implement the support at both host and guest
kernel.

Couple of new irq #defs "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*"
added to use in the exception code to check for PMI interrupts.

__SOFTEN_TEST macro is modified to support the PMI interrupt.
Present __SOFTEN_TEST code loads the soft_enabled from paca and
check to call masked_interrupt handler code. To support both
current behaviour and PMI masking, these changes are added,

1) Current LR register content are saved in R11
2) "bge" branch operation is changed to "bgel".
3) restore R11 to LR

Reason:

To retain PMI as NMI behaviour for flag state of 1, we save the LR
register value in R11 and branch to the "masked_interrupt" handler with
LR update. And in the "masked_interrupt" handler, we check for the
"SOFTEN_VALUE_*" value in R10 for PMI and branch back with "blr" if
PMI.

To mask PMI for a flag >1 value, masked_interrupt avoids the above
check and continues to execute the masked_interrupt code and
disables MSR[EE] and updates irq_happened with PMI info.

Finally, saving of R11 is moved before calling SOFTEN_TEST in the
__EXCEPTION_PROLOG_1 macro to support saving of LR values in
SOFTEN_TEST.

Signed-off-by: Madhavan Srinivasan 
---
 arch/powerpc/include/asm/exception-64s.h | 22 --
 arch/powerpc/include/asm/hw_irq.h        |  1 +
 arch/powerpc/kernel/exceptions-64s.S     | 27 ---
 3 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 44d3f539d8a5..c951b7ab5108 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -166,8 +166,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
 	SAVE_CTR(r10, area);						\
 	mfcr	r9;							\
-	extra(vec);							\
 	std	r11,area+EX_R11(r13);					\
+	extra(vec);							\
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
@@ -403,12 +403,17 @@ label##_relon_hv:					\
 #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
 #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
+#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
+#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
 
 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
 	cmpwi	r10,LAZY_INTERRUPT_DISABLED;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
-	bge	masked_##h##interrupt

At which point, can't we pass in the interrupt level we want to mask
for to SOFTEN_TEST, and avoid all this extra code changes?

IIUC, we do pass the interrupt info to SOFTEN_TEST. In case of a
PMU interrupt we will have the value as PACA_IRQ_PMI.




PMU masked interrupt will compare with SOFTEN_LEVEL_PMU, existing
interrupts will compare with SOFTEN_LEVEL_EE (or whatever suitable
names there are).

  

+	mflr	r11;							\
+	bgel	masked_##h##interrupt;					\
+	mtlr	r11;

This might corrupt return prediction when masked_interrupt does
not

Hmm this is a valid point.


return. I guess that's uncommon case though.

No, it is. The kernel mostly uses irq_disable with (1) today, and only
in specific cases do we disable all the interrupts. So we are going to
return almost always when irqs are soft disabled.

Since we need to support the PMIs as NMI when irq disable level is 1,
we need to skip masked_interrupt.

As you mentioned, if we have a separate macro (SOFTEN_TEST_PMU),
these can be avoided, but then it is code replication and we may need
to change some more macros. But this is interesting; let me work on this.

I would really prefer to do that, even if it means a little more code.

Another option is to give an additional parameter to the MASKABLE
variants of the exception handlers, which you can pass in the
"mask level" into. I think it's not a bad idea to make it explicit
even for the existing ones so it's clear which level they are masked
at.


The issue here is that the masked_interrupt function is not part of the
interrupt vector code (__EXCEPTION_PROLOG_1). So in case of PMI,
if we enter the masked_interrupt 

Re: [RFC PATCH 9/9] powerpc: rewrite local_t using soft_irq

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 11:23 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:22 +0530
Madhavan Srinivasan  wrote:


https://lkml.org/lkml/2008/12/16/450

Modifications to Rusty's benchmark code:
  - Executed only local_t test

Here are the values with the patch.

Time in ns per iteration

Local_t      Without Patch  With Patch

_inc         28             8
_add         28             8
_read        3              3
_add_return  28             7

Tested the patch in a
  - pSeries LPAR (with perf record)

Very nice. I'd like to see these patches get in. We can
probably use the feature in other places too.


Thanks for review.

Maddy


Thanks,
Nick




Re: [RFC PATCH 8/9] powerpc: Support to replay PMIs

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 11:20 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:21 +0530
Madhavan Srinivasan  wrote:


Code to replay the Performance Monitoring Interrupts(PMI).
In the masked_interrupt handler, for PMIs we reset the MSR[EE]
and return. This is due to the fact that PMIs are level triggered.
In the __check_irq_replay(), we enabled the MSR[EE] which will
fire the interrupt for us.

Patch also adds a new arch_local_irq_disable_var() variant. New
variant takes an input value to write to the paca->soft_enabled.
This will be used in following patch to implement the tri-state
value for soft-enabled.

Same comment also applies about patches being standalone
transformations that work before and after. Some of these
can be squashed together I think.

Sure.





Signed-off-by: Madhavan Srinivasan 
---
  arch/powerpc/include/asm/hw_irq.h | 14 ++
  arch/powerpc/kernel/irq.c |  9 -
  2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index cc69dde6eb84..863179654452 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -81,6 +81,20 @@ static inline unsigned long arch_local_irq_disable(void)
 	return flags;
 }
  
+static inline unsigned long arch_local_irq_disable_var(int value)

+{
+   unsigned long flags, zero;
+
+   asm volatile(
+   "li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
+		: "=r" (flags), "=&r" (zero)
+   : "i" (offsetof(struct paca_struct, soft_enabled)),\
+ "i" (value)
+   : "memory");
+
+   return flags;
+}

The arch_ prefix suggests it is the arch implementation of a generic
kernel function or something. I think our soft interrupt levels are
just used in powerpc-specific code.

The name could also be a little more descriptive.

I would have our internal function be something like

soft_irq_set_level(), and then the arch disable just sets to
the appropriate level as it does today.

The PMU disable level could be implemented in powerpc specific
header with local_irq_and_pmu_disable() or something like that.


Yes. will do.



Thanks,
Nick




Re: [PATCH 2/2] powerpc/64: Do load of PACAKBASE in LOAD_HANDLER

2016-07-26 Thread Nicholas Piggin
On Tue, 26 Jul 2016 15:29:30 +1000
Michael Ellerman  wrote:

> The LOAD_HANDLER macro requires that you have previously loaded "reg"
> with PACAKBASE. Although that gives callers flexibility to get
> PACAKBASE in some interesting way, none of the callers actually do
> that. So fold the load of PACAKBASE into the macro, making it simpler
> for callers to use correctly.
> 
> Signed-off-by: Michael Ellerman 

I don't see any problem with this.

Reviewed-by: Nick Piggin 

Re: [RFC PATCH 7/9] powerpc: Add support to mask perf interrupts

2016-07-26 Thread Nicholas Piggin
On Tue, 26 Jul 2016 11:55:51 +0530
Madhavan Srinivasan  wrote:

> On Tuesday 26 July 2016 11:16 AM, Nicholas Piggin wrote:
> > On Mon, 25 Jul 2016 20:22:20 +0530
> > Madhavan Srinivasan  wrote:
> >  
> >> To support masking of the PMI interrupts, a couple of new interrupt
> >> handler macros are added: MASKABLE_EXCEPTION_PSERIES_OOL and
> >> MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to include
> >> the SOFTEN_TEST and implement the support in both host and guest
> >> kernels.
> >>
> >> A couple of new irq #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*",
> >> are added for use in the exception code to check for PMI interrupts.
> >>
> >> The __SOFTEN_TEST macro is modified to support the PMI interrupt.
> >> The present __SOFTEN_TEST code loads soft_enabled from the paca and
> >> checks it to call the masked_interrupt handler code. To support both
> >> the current behaviour and PMI masking, these changes are added:
> >>
> >> 1) The current LR register content is saved in R11
> >> 2) The "bge" branch operation is changed to "bgel"
> >> 3) R11 is restored to LR
> >>
> >> Reason:
> >>
> >> To retain PMI-as-NMI behaviour for a flag state of 1, we save the LR
> >> register value in R11 and branch to the "masked_interrupt" handler
> >> with an LR update. And in the "masked_interrupt" handler, we check
> >> for the "SOFTEN_VALUE_*" value in R10 for PMI and branch back with
> >> "blr" if PMI.
> >>
> >> To mask PMI for a flag value >1, masked_interrupt avoids the above
> >> check, continues to execute the masked_interrupt code, disables
> >> MSR[EE] and updates irq_happened with the PMI info.
> >>
> >> Finally, the saving of R11 is moved before calling SOFTEN_TEST in
> >> the __EXCEPTION_PROLOG_1 macro to support saving of LR values in
> >> SOFTEN_TEST.
> >>
> >> Signed-off-by: Madhavan Srinivasan 
> >> ---
> >>  arch/powerpc/include/asm/exception-64s.h | 22 +-
> >>  arch/powerpc/include/asm/hw_irq.h        |  1 +
> >>  arch/powerpc/kernel/exceptions-64s.S     | 27 +-
> >>  3 files changed, 45 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> >> index 44d3f539d8a5..c951b7ab5108 100644
> >> --- a/arch/powerpc/include/asm/exception-64s.h
> >> +++ b/arch/powerpc/include/asm/exception-64s.h
> >> @@ -166,8 +166,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
> >>  	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);	\
> >>  	SAVE_CTR(r10, area);					\
> >>  	mfcr	r9;						\
> >> -	extra(vec);						\
> >>  	std	r11,area+EX_R11(r13);				\
> >> +	extra(vec);						\
> >>  	std	r12,area+EX_R12(r13);				\
> >>  	GET_SCRATCH0(r10);					\
> >>  	std	r10,area+EX_R13(r13)
> >> @@ -403,12 +403,17 @@ label##_relon_hv:				\
> >>  #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
> >>  #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
> >>  #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
> >> +#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
> >> +#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
> >>
> >>  #define __SOFTEN_TEST(h, vec)					\
> >>  	lbz	r10,PACASOFTIRQEN(r13);				\
> >>  	cmpwi	r10,LAZY_INTERRUPT_DISABLED;			\
> >>  	li	r10,SOFTEN_VALUE_##vec;				\
> >> -	bge	masked_##h##interrupt
> > At which point, can't we pass in the interrupt level we want to mask
> > for to SOFTEN_TEST, and avoid all this extra code changes?  
> IIUC, we do pass the interrupt info to SOFTEN_TEST. In case of a
> PMU interrupt, we will have the value PACA_IRQ_PMI.
> 
> 
> >
> >
> > PMU masked interrupt will compare with SOFTEN_LEVEL_PMU, existing
> > interrupts will compare with SOFTEN_LEVEL_EE (or whatever suitable
> > names there are).
> >
> >  
> >> +	mflr	r11;						\
> >> +	bgel	masked_##h##interrupt;				\
> >> +	mtlr	r11;
> > This might corrupt return prediction when masked_interrupt does
> > not  
> Hmm this is a valid point.
> 
> > return. I guess that's uncommon case though.  
> 
> No, it is. The kernel mostly uses irq_disable with (1) today, and only
> in specific cases do we disable all the interrupts. So we are going to
> return almost always when irqs are soft disabled.
> 
> Since we need to support the PMIs as NMI when irq disable level is 1,
> we need to skip masked_interrupt.
> 
> As you mentioned if we have a separate macro (SOFTEN_TEST_PMU),
> these can be avoided, but then it is code replication and we may need
> to change some more macros. But this interesting, let me work on this.

I would really prefer to do that, even if it means a little more code.

Another option is to give 

Re: [RFC PATCH 7/9] powerpc: Add support to mask perf interrupts

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 11:16 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:20 +0530
Madhavan Srinivasan  wrote:


To support masking of the PMI interrupts, a couple of new interrupt
handler macros are added: MASKABLE_EXCEPTION_PSERIES_OOL and
MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to include the
SOFTEN_TEST and implement the support in both host and guest kernels.

A couple of new irq #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*",
are added for use in the exception code to check for PMI interrupts.

The __SOFTEN_TEST macro is modified to support the PMI interrupt.
The present __SOFTEN_TEST code loads soft_enabled from the paca and
checks it to call the masked_interrupt handler code. To support both
the current behaviour and PMI masking, these changes are added:

1) The current LR register content is saved in R11
2) The "bge" branch operation is changed to "bgel"
3) R11 is restored to LR

Reason:

To retain PMI-as-NMI behaviour for a flag state of 1, we save the LR
register value in R11 and branch to the "masked_interrupt" handler with
an LR update. And in the "masked_interrupt" handler, we check for the
"SOFTEN_VALUE_*" value in R10 for PMI and branch back with "blr" if
PMI.

To mask PMI for a flag value >1, masked_interrupt avoids the above
check, continues to execute the masked_interrupt code, disables
MSR[EE] and updates irq_happened with the PMI info.

Finally, the saving of R11 is moved before calling SOFTEN_TEST in the
__EXCEPTION_PROLOG_1 macro to support saving of LR values in
SOFTEN_TEST.

Signed-off-by: Madhavan Srinivasan 
---
 arch/powerpc/include/asm/exception-64s.h | 22 +-
 arch/powerpc/include/asm/hw_irq.h        |  1 +
 arch/powerpc/kernel/exceptions-64s.S     | 27 +-
 3 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 44d3f539d8a5..c951b7ab5108 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -166,8 +166,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
 	SAVE_CTR(r10, area);						\
 	mfcr	r9;							\
-	extra(vec);							\
 	std	r11,area+EX_R11(r13);					\
+	extra(vec);							\
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
@@ -403,12 +403,17 @@ label##_relon_hv:					\
 #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
 #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
+#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
+#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI

 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
 	cmpwi	r10,LAZY_INTERRUPT_DISABLED;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
-	bge	masked_##h##interrupt

At which point, can't we pass in the interrupt level we want to mask
for to SOFTEN_TEST, and avoid all this extra code changes?

IIUC, we do pass the interrupt info to SOFTEN_TEST. In case of a
PMU interrupt, we will have the value PACA_IRQ_PMI.





PMU masked interrupt will compare with SOFTEN_LEVEL_PMU, existing
interrupts will compare with SOFTEN_LEVEL_EE (or whatever suitable
names there are).



+	mflr	r11;							\
+	bgel	masked_##h##interrupt;					\
+	mtlr	r11;

This might corrupt return prediction when masked_interrupt does not

Hmm this is a valid point.


return. I guess that's uncommon case though.


No, it is. The kernel mostly uses irq_disable with (1) today, and only
in specific cases do we disable all the interrupts. So we are going to
return almost always when irqs are soft disabled.

Since we need to support the PMIs as NMI when irq disable level is 1,
we need to skip masked_interrupt.

As you mentioned if we have a separate macro (SOFTEN_TEST_PMU),
these can be avoided, but then it is code replication and we may need
to change some more macros. But this interesting, let me work on this.


Maddy

  But I think we can avoid
this if we do the above, no?

Thanks,
Nick




Re: [PATCH 1/2] powerpc/64: Correct comment on LOAD_HANDLER()

2016-07-26 Thread Nicholas Piggin
On Tue, 26 Jul 2016 15:29:29 +1000
Michael Ellerman  wrote:

> The comment for LOAD_HANDLER() was wrong. The part about kdump has not
> been true since 1f6a93e4c35e ("powerpc: Make it possible to move the
> interrupt handlers away from the kernel").
> 
> Describe how it currently works, and combine the two separate comments
> into one.
> 
> Signed-off-by: Michael Ellerman 

Reviewed-by: Nick Piggin 

> ---
>  arch/powerpc/include/asm/exception-64s.h | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/exception-64s.h
> b/arch/powerpc/include/asm/exception-64s.h index
> 93ae809fe5ea..4ff3e2f16b5d 100644 ---
> a/arch/powerpc/include/asm/exception-64s.h +++
> b/arch/powerpc/include/asm/exception-64s.h @@ -84,12 +84,12 @@
>  
>  /*
>   * We're short on space and time in the exception prolog, so we can't
> - * use the normal SET_REG_IMMEDIATE macro. Normally we just need the
> - * low halfword of the address, but for Kdump we need the whole low
> - * word.
> + * use the normal LOAD_REG_IMMEDIATE macro to load the address of label.
> + * Instead we get the base of the kernel from paca->kernelbase and or in
> + * the low part of label. This requires that the label be within 64KB of
> + * kernelbase, and that kernelbase be 64K aligned.
>   */
>  #define LOAD_HANDLER(reg, label)					\
> -	/* Handlers must be within 64K of kbase, which must be 64k aligned */ \
>  	ori	reg,reg,(label)-_stext;	/* virt addr of handler ... */
> 
>  /* Exception register prefixes */


Re: [RFC PATCH 1/9] Add #defs for paca->soft_enabled flags

2016-07-26 Thread Nicholas Piggin
On Tue, 26 Jul 2016 11:35:16 +0530
Madhavan Srinivasan  wrote:

> On Tuesday 26 July 2016 10:57 AM, Nicholas Piggin wrote:
> > On Mon, 25 Jul 2016 20:22:14 +0530
> > Madhavan Srinivasan  wrote:
> >  
> >> Two #defines, LAZY_INTERRUPT_ENABLED and
> >> LAZY_INTERRUPT_DISABLED, are added to be used
> >> when updating paca->soft_enabled.
> > This is a very nice patchset, but can this not be a new name?  
> 
> Thanks, but the idea is from Ben :)
> Regarding the name, I looked at the initial patchset posted by
> Paul and took the name from it :).
> 
> But I will work on that; any suggestions for the name?

I don't have a strong preference. LAZY_* is not horrible itself,
it's just that softe variant is used elsewhere. I don't mind if
you rename softe to something else completely (although Ben might).
Allow me to apply the first coat of paint to the bikeshed:

irq_disable_level

IRQ_DISABLE_LEVEL_NONE
IRQ_DISABLE_LEVEL_LINUX
IRQ_DISABLE_LEVEL_PMU


Re: [RFC PATCH 6/9] powerpc: modify __SOFTEN_TEST to support tri-state soft_enabled flag

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 11:11 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:19 +0530
Madhavan Srinivasan  wrote:


Foundation patch to support checking of new flag for
"paca->soft_enabled". Modify the condition checking for the
"soft_enabled" from "equal" to "greater than or equal to".

Rather than a "tri-state" and the mystery "2" state, can you
make a #define for that guy, and use levels.


Yes. Will do. Will wait for any feedback on the macro name
for the patch 1 of this series.

Maddy


0-> all enabled
1-> "linux" interrupts disabled
2-> PMU also disabled
etc.

Thanks,
Nick




Re: [RFC PATCH 5/9] powerpc: reverse the soft_enable logic

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 11:01 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:18 +0530
Madhavan Srinivasan  wrote:


"paca->soft_enabled" is used as a flag to mask some of interrupts.
Currently supported flag values and their details:

soft_enabled    MSR[EE]

0               0       Disabled (PMI and HMI not masked)
1               1       Enabled

"paca->soft_enabled" is initialised to 1 to mark the interrupts as
enabled. arch_local_irq_disable() will toggle the value when interrupts
need to be disabled. At this point, the interrupts are not actually
disabled; instead, the interrupt vector has code to check for the flag
and mask the interrupt when it occurs. By "mask", we mean it updates
paca->irq_happened and returns. arch_local_irq_restore() is called to
re-enable interrupts, which checks and replays interrupts if any
occurred.

Now, as mentioned, the current logic does not mask "performance
monitoring interrupts", and PMIs are implemented as NMIs. But this
patchset depends on local_irq_* for a successful local_* update.
Meaning, mask all possible interrupts during the local_* update and
replay them after the update.

So the idea here is to reverse the "paca->soft_enabled" logic. New
values and details:

soft_enabled    MSR[EE]

1               0       Disabled (PMI and HMI not masked)
0               1       Enabled

The reason for this change is to create the foundation for a third flag
value "2" for "soft_enabled", to add support for masking PMIs. When
arch_irq_disable_* is called with a value of "2", PMI interrupts are
masked. But when called with a value of "1", PMIs are not masked.

With the new flag value for "soft_enabled", the states look like:

soft_enabled    MSR[EE]

2               0       PMIs also disabled
1               0       Disabled (PMI and HMI not masked)
0               1       Enabled

And the interrupt handler code for checking has been modified to check
for a "greater than or equal to 1" condition instead.

This bit of the patch seems to have been moved into another part
of the series. Ideally (unless there is a good reason), it is nice
to have each individual patch result in a working kernel before
and after.

Agreed. But I needed to reason out the change, and hence added
all the info here. I will edit the info in the next version.

Maddy




Nice way to avoid adding more branches though.

Thanks,
Nick




Re: [RFC PATCH 1/9] Add #defs for paca->soft_enabled flags

2016-07-26 Thread Madhavan Srinivasan



On Tuesday 26 July 2016 10:57 AM, Nicholas Piggin wrote:

On Mon, 25 Jul 2016 20:22:14 +0530
Madhavan Srinivasan  wrote:


Two #defines, LAZY_INTERRUPT_ENABLED and
LAZY_INTERRUPT_DISABLED, are added to be used
when updating paca->soft_enabled.

This is a very nice patchset, but can this not be a new name?


Thanks, but the idea is from Ben :)
Regarding the name, I looked at the initial patchset posted by
Paul and took the name from it :).

But I will work on that; any suggestions for the name?

Maddy


We use "soft enabled/disabled" everywhere for it. I think lazy
is an implementation detail anyway because some interrupts don't
cause a hard disable at all.

Thanks,
Nick


