Re: [PATCH] powerpc: workaround clang codegen bug in dcbz

2019-07-29 Thread Christophe Leroy




On 29/07/2019 at 22:32, Nathan Chancellor wrote:

On Mon, Jul 29, 2019 at 01:25:41PM -0700, Nick Desaulniers wrote:

Commit 6c5875843b87 ("powerpc: slightly improve cache helpers") exposed
what looks like a codegen bug in Clang's handling of `%y` output
template with `Z` constraint. This is resulting in panics during boot
for 32b powerpc builds w/ Clang, as reported by our CI.

Add back the original code that worked behind a preprocessor check for
__clang__ until we can fix LLVM.

Further, it seems that clang allnoconfig builds are unhappy with `Z`, as
reported by 0day bot. This is likely because Clang warns about inline
asm constraints when the constraint requires inlining to be semantically
valid.

Link: https://bugs.llvm.org/show_bug.cgi?id=42762
Link: https://github.com/ClangBuiltLinux/linux/issues/593
Link: 
https://lore.kernel.org/lkml/20190721075846.GA97701@archlinux-threadripper/
Debugged-by: Nathan Chancellor 
Reported-by: Nathan Chancellor 
Reported-by: kbuild test robot 
Suggested-by: Nathan Chancellor 
Signed-off-by: Nick Desaulniers 
---
Alternatively, we could just revert 6c5875843b87. It seems that GCC
generates the same code for these functions for out of line versions.
But I'm not sure how the inlined code generated would be affected.
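
For context, the change at issue swapped the dcbz() helper's inline asm
roughly as follows (a paraphrased sketch of arch/powerpc/include/asm/cache.h,
not the verbatim diff; the "%y" modifier expands a "Z"-constrained memory
operand into the two-register "base,index" form that dcbz takes):

    /* before 6c5875843b87: first operand hardwired to 0 */
    static inline void dcbz(void *addr)
    {
            __asm__ __volatile__ ("dcbz 0, %0" : : "r" (addr) : "memory");
    }

    /* after 6c5875843b87: let the compiler pick the addressing mode */
    static inline void dcbz(void *addr)
    {
            __asm__ __volatile__ ("dcbz %y0" : : "Z" (*(u8 *)addr) : "memory");
    }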


For the record:

https://godbolt.org/z/z57VU7

This seems consistent with what Michael found so I don't think a revert
is entirely unreasonable.


Your example functions are too simple to show anything. The functions 
take only one parameter, so of course GCC won't use two registers even 
when given the opportunity.


Christophe



Either way:

Reviewed-by: Nathan Chancellor 



Re: [DOC][PATCH v5 1/4] powerpc: Document some HCalls for Storage Class Memory

2019-07-29 Thread Vaibhav Jain


Thanks everyone for reviewing this patch set. The v4 that got merged upstream
didn't include this doc patch, so I will re-spin it as a separate, independent
doc patch incorporating your review comments.

Cheers,
-- 
Vaibhav Jain 
Linux Technology Center, IBM India Pvt. Ltd.



CVE-2019-13648: Linux kernel: powerpc: kernel crash in TM handling triggerable by any local user

2019-07-29 Thread Michael Neuling
The Linux kernel for powerpc since v3.9 has a bug in the TM handling where any
unprivileged local user may crash the operating system.

This bug affects machines using 64-bit CPUs where Transactional Memory (TM) is
not present or has been disabled (see below for more details on affected CPUs).

To trigger the bug, a process constructs a signal context which still has the
MSR TS bits set. That process then passes this signal context to the
sigreturn() system call. When returning to userspace, the kernel then crashes
with a bad TM transition (TM Bad Thing) or by executing TM code on a non-TM
system.
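
Schematically, the trigger is the pattern sketched below (a minimal
illustration, not the actual reproducer; it assumes the 64-bit glibc
ucontext layout, and the real test is the sigfuz selftest linked at the end):

    #include <signal.h>
    #include <ucontext.h>
    #include <asm/ptrace.h>             /* PT_MSR */

    #define MSR_TS_S        (1UL << 33) /* MSR[TS] "suspended" bit */

    static void handler(int sig, siginfo_t *info, void *ucv)
    {
            ucontext_t *uc = ucv;

            /* Corrupt the saved MSR so the TS bits are still set even
             * though no transaction exists (or TM is absent/disabled). */
            uc->uc_mcontext.gp_regs[PT_MSR] |= MSR_TS_S;
    }       /* returning from the handler passes this context to sigreturn() */

    int main(void)
    {
            struct sigaction sa = { .sa_sigaction = handler,
                                    .sa_flags = SA_SIGINFO };

            sigaction(SIGUSR1, &sa, NULL);
            raise(SIGUSR1);     /* affected kernels crash on the return path */
            return 0;
    }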

All 64-bit machines where TM is not present are affected. This includes PowerPC
970 (G5), PA6T, POWER5/6/7 VMs under KVM or LPARs under PowerVM, and POWER9 bare
metal.

Additionally systems with TM hardware but where TM is disabled in software (via
ppc_tm=off kernel cmdline) are also affected. This includes POWER8/9 VMs under
KVM or LPARs under PowerVM and POWER8 bare metal.

The bug was introduced in commit:
  2b0a576d15e0 ("powerpc: Add new transactional memory state to the signal
context")

which was originally merged in v3.9.

The upstream fix is here:
  https://git.kernel.org/torvalds/c/f16d80b75a096c52354c6e0a574993f3b0dfbdfe

The fix can be verified by running `sigfuz -m` from the kernel selftests:
 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/powerpc/signal/sigfuz.c?h=v5.2

cheers
Mikey



Re: [PATCH] drivers/macintosh/smu.c: Mark expected switch fall-through

2019-07-29 Thread Stephen Rothwell
Hi all,

On Tue, 30 Jul 2019 14:37:04 +1000 Stephen Rothwell  
wrote:
>
> Mark switch cases where we are expecting to fall through.
> 
> This patch fixes the following warning (Building: powerpc):
> 
> drivers/macintosh/smu.c: In function 'smu_queue_i2c':
> drivers/macintosh/smu.c:854:21: warning: this statement may fall through 
> [-Wimplicit-fallthrough=]
>cmd->info.devaddr &= 0xfe;
>~~^~~
> drivers/macintosh/smu.c:855:2: note: here
>   case SMU_I2C_TRANSFER_STDSUB:
>   ^~~~
> 
> Cc: Benjamin Herrenschmidt 
> Cc: Gustavo A. R. Silva 
> Cc: Kees Cook 
> Signed-off-by: Stephen Rothwell 

Fixes: 0365ba7fb1fa ("[PATCH] ppc64: SMU driver update & i2c support")

Sorry, forgot :-)
-- 
Cheers,
Stephen Rothwell




[PATCH] drivers/macintosh/smu.c: Mark expected switch fall-through

2019-07-29 Thread Stephen Rothwell
Mark switch cases where we are expecting to fall through.

This patch fixes the following warning (Building: powerpc):

drivers/macintosh/smu.c: In function 'smu_queue_i2c':
drivers/macintosh/smu.c:854:21: warning: this statement may fall through 
[-Wimplicit-fallthrough=]
   cmd->info.devaddr &= 0xfe;
   ~~^~~
drivers/macintosh/smu.c:855:2: note: here
  case SMU_I2C_TRANSFER_STDSUB:
  ^~~~

Cc: Benjamin Herrenschmidt 
Cc: Gustavo A. R. Silva 
Cc: Kees Cook 
Signed-off-by: Stephen Rothwell 
---
 drivers/macintosh/smu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/macintosh/smu.c b/drivers/macintosh/smu.c
index 276065c888bc..23f1f41c8602 100644
--- a/drivers/macintosh/smu.c
+++ b/drivers/macintosh/smu.c
@@ -852,6 +852,7 @@ int smu_queue_i2c(struct smu_i2c_cmd *cmd)
break;
case SMU_I2C_TRANSFER_COMBINED:
cmd->info.devaddr &= 0xfe;
+   /* fall through */
case SMU_I2C_TRANSFER_STDSUB:
if (cmd->info.sublen > 3)
return -EINVAL;
-- 
2.22.0

-- 
Cheers,
Stephen Rothwell




Re: [PATCH] powerpc/kvm: Fall through switch case explicitly

2019-07-29 Thread Stephen Rothwell
Hi Gustavo,

On Mon, 29 Jul 2019 22:57:05 -0500 "Gustavo A. R. Silva" 
 wrote:
>
> On 7/29/19 10:50 PM, Stephen Rothwell wrote:
> > 
> > I am assuming that Michael Ellerman will take it into his fixes tree
> > and send it to Linus fairly soon as it actually breaks some powerpc
> > builds.
> 
> Yeah. It seems that now that -Wimplicit-fallthrough has been globally enabled,
> that's the case for all of these patches.

Only some of them cause failures (as opposed to warnings)... in this
case because arch/powerpc is built with -Werror.

-- 
Cheers,
Stephen Rothwell




Re: [PATCH] powerpc/kvm: Fall through switch case explicitly

2019-07-29 Thread Gustavo A. R. Silva



On 7/29/19 10:50 PM, Stephen Rothwell wrote:
> Hi Gustavo,
> 
> On Mon, 29 Jul 2019 22:30:34 -0500 "Gustavo A. R. Silva" 
>  wrote:
>>
>> If no one takes it by tomorrow, I'll take it in my tree.
> 
> I am assuming that Michael Ellerman will take it into his fixes tree
> and send it to Linus fairly soon as it actually breaks some powerpc
> builds.
> 

Yeah. It seems that now that -Wimplicit-fallthrough has been globally enabled,
that's the case for all of these patches.

Anyway, I can always take them in my tree if needed.

Thanks
--
Gustavo


Re: [PATCH] powerpc/kvm: Fall through switch case explicitly

2019-07-29 Thread Stephen Rothwell
Hi Gustavo,

On Mon, 29 Jul 2019 22:30:34 -0500 "Gustavo A. R. Silva" 
 wrote:
>
> If no one takes it by tomorrow, I'll take it in my tree.

I am assuming that Michael Ellerman will take it into his fixes tree
and send it to Linus fairly soon as it actually breaks some powerpc
builds.

-- 
Cheers,
Stephen Rothwell




Re: [PATCH] powerpc/kvm: Fall through switch case explicitly

2019-07-29 Thread Gustavo A. R. Silva
Hi Stephen,

On 7/29/19 10:18 PM, Stephen Rothwell wrote:
> Hi all,
> 
> On Mon, 29 Jul 2019 18:45:40 -0500 "Gustavo A. R. Silva" 
>  wrote:
>>
>> On 7/29/19 3:16 AM, Stephen Rothwell wrote:
>>>
>>> On Mon, 29 Jul 2019 11:25:36 +0530 Santosh Sivaraj  
>>> wrote:  

 Implicit fallthrough warning was enabled globally which broke
 the build. Make it explicit with a `fall through` comment.

 Signed-off-by: Santosh Sivaraj   
>>
>> Reviewed-by: Gustavo A. R. Silva 
>>
>> Thanks!
>> --
>> Gustavo
>>
 ---
  arch/powerpc/kvm/book3s_32_mmu.c | 1 +
  1 file changed, 1 insertion(+)

 diff --git a/arch/powerpc/kvm/book3s_32_mmu.c 
 b/arch/powerpc/kvm/book3s_32_mmu.c
 index 653936177857..18f244aad7aa 100644
 --- a/arch/powerpc/kvm/book3s_32_mmu.c
 +++ b/arch/powerpc/kvm/book3s_32_mmu.c
 @@ -239,6 +239,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct 
 kvm_vcpu *vcpu, gva_t eaddr,
case 2:
case 6:
pte->may_write = true;
 +  /* fall through */
case 3:
case 5:
case 7:
 -- 
 2.20.1
  
>>>
>>> Thanks
>>>
>>> Reviewed-by: Stephen Rothwell 
>>>
>>> This only shows up as a warning in a powerpc allyesconfig build.
>>>   
> 
> I will apply this to linux-next today (and keep it until it turns up in
> some other tree).
> 

If no one takes it by tomorrow, I'll take it in my tree.

Thanks!
--
Gustavo


Re: [PATCH] powerpc/kvm: Fall through switch case explicitly

2019-07-29 Thread Stephen Rothwell
Hi all,

On Mon, 29 Jul 2019 18:45:40 -0500 "Gustavo A. R. Silva" 
 wrote:
>
> On 7/29/19 3:16 AM, Stephen Rothwell wrote:
> > 
> > On Mon, 29 Jul 2019 11:25:36 +0530 Santosh Sivaraj  
> > wrote:  
> >>
> >> Implicit fallthrough warning was enabled globally which broke
> >> the build. Make it explicit with a `fall through` comment.
> >>
> >> Signed-off-by: Santosh Sivaraj   
> 
> Reviewed-by: Gustavo A. R. Silva 
> 
> Thanks!
> --
> Gustavo
> 
> >> ---
> >>  arch/powerpc/kvm/book3s_32_mmu.c | 1 +
> >>  1 file changed, 1 insertion(+)
> >>
> >> diff --git a/arch/powerpc/kvm/book3s_32_mmu.c 
> >> b/arch/powerpc/kvm/book3s_32_mmu.c
> >> index 653936177857..18f244aad7aa 100644
> >> --- a/arch/powerpc/kvm/book3s_32_mmu.c
> >> +++ b/arch/powerpc/kvm/book3s_32_mmu.c
> >> @@ -239,6 +239,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct 
> >> kvm_vcpu *vcpu, gva_t eaddr,
> >>case 2:
> >>case 6:
> >>pte->may_write = true;
> >> +  /* fall through */
> >>case 3:
> >>case 5:
> >>case 7:
> >> -- 
> >> 2.20.1
> >>  
> > 
> > Thanks
> > 
> > Reviewed-by: Stephen Rothwell 
> > 
> > This only shows up as a warning in a powerpc allyesconfig build.
> >   

I will apply this to linux-next today (and keep it until it turns up in
some other tree).

-- 
Cheers,
Stephen Rothwell




[PATCH 107/107] perf vendor events power9: Added missing event descriptions

2019-07-29 Thread Arnaldo Carvalho de Melo
From: Michael Petlan 

Documentation source:

https://wiki.raptorcs.com/w/images/6/6b/POWER9_PMU_UG_v12_28NOV2018_pub.pdf

Signed-off-by: Michael Petlan 
Reviewed-by: Madhavan Srinivasan 
Cc: Ananth N Mavinakayanahalli 
Cc: Carl Love 
Cc: Michael Ellerman 
Cc: Naveen N. Rao 
Cc: Paul Clarke 
Cc: Sukadev Bhattiprolu 
Cc: linuxppc-...@ozlabs.org
LPU-Reference: 20190719100837.7503-1-mpet...@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo 
---
 tools/perf/pmu-events/arch/powerpc/power9/memory.json | 2 +-
 tools/perf/pmu-events/arch/powerpc/power9/other.json  | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/perf/pmu-events/arch/powerpc/power9/memory.json 
b/tools/perf/pmu-events/arch/powerpc/power9/memory.json
index 2e2ebc700c74..c3bb283e37e9 100644
--- a/tools/perf/pmu-events/arch/powerpc/power9/memory.json
+++ b/tools/perf/pmu-events/arch/powerpc/power9/memory.json
@@ -52,7 +52,7 @@
   {,
 "EventCode": "0x4D02C",
 "EventName": "PM_PMC1_REWIND",
-"BriefDescription": ""
+"BriefDescription": "PMC1 rewind event"
   },
   {,
 "EventCode": "0x15158",
diff --git a/tools/perf/pmu-events/arch/powerpc/power9/other.json 
b/tools/perf/pmu-events/arch/powerpc/power9/other.json
index 48cf4f920b3f..62b864269623 100644
--- a/tools/perf/pmu-events/arch/powerpc/power9/other.json
+++ b/tools/perf/pmu-events/arch/powerpc/power9/other.json
@@ -237,7 +237,7 @@
   {,
 "EventCode": "0xD0B0",
 "EventName": "PM_HWSYNC",
-"BriefDescription": ""
+"BriefDescription": "A hwsync instruction was decoded and transferred"
   },
   {,
 "EventCode": "0x168B0",
@@ -1232,7 +1232,7 @@
   {,
 "EventCode": "0xD8AC",
 "EventName": "PM_LWSYNC",
-"BriefDescription": ""
+"BriefDescription": "An lwsync instruction was decoded and transferred"
   },
   {,
 "EventCode": "0x2094",
@@ -1747,7 +1747,7 @@
   {,
 "EventCode": "0xD8B0",
 "EventName": "PM_PTESYNC",
-"BriefDescription": ""
+"BriefDescription": "A ptesync instruction was counted when the 
instruction is decoded and transmitted"
   },
   {,
 "EventCode": "0x26086",
@@ -2107,7 +2107,7 @@
   {,
 "EventCode": "0xF080",
 "EventName": "PM_LSU_STCX_FAIL",
-"BriefDescription": ""
+"BriefDescription": "The LSU detects the condition that a stcx instruction 
failed. No requirement to wait for a response from the nest"
   },
   {,
 "EventCode": "0x30038",
-- 
2.21.0



[PATCH v2] Fix typo reigster to register

2019-07-29 Thread Pei-Hsuan Hung
Signed-off-by: Pei-Hsuan Hung 
Acked-by: Liviu Dudau 
Cc: triv...@kernel.org
---
Hi Liviu, thanks for your reply.
This patch is generated by a script so at first I didn't notice there is
also a typo in the word coefficient. I've fixed the typo in this
version.

 arch/powerpc/kernel/eeh.c   | 2 +-
 arch/powerpc/platforms/cell/spufs/switch.c  | 4 ++--
 drivers/extcon/extcon-rt8973a.c | 2 +-
 drivers/gpu/drm/arm/malidp_regs.h   | 2 +-
 drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h | 2 +-
 drivers/scsi/lpfc/lpfc_hbadisc.c| 2 +-
 fs/userfaultfd.c| 2 +-
 7 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
index c0e4b73191f3..d75c9c24ec4d 100644
--- a/arch/powerpc/kernel/eeh.c
+++ b/arch/powerpc/kernel/eeh.c
@@ -1030,7 +1030,7 @@ int __init eeh_ops_register(struct eeh_ops *ops)
 }
 
 /**
- * eeh_ops_unregister - Unreigster platform dependent EEH operations
+ * eeh_ops_unregister - Unregister platform dependent EEH operations
  * @name: name of EEH platform operations
  *
  * Unregister the platform dependent EEH operation callback
diff --git a/arch/powerpc/platforms/cell/spufs/switch.c 
b/arch/powerpc/platforms/cell/spufs/switch.c
index 5c3f5d088c3b..9548a086937b 100644
--- a/arch/powerpc/platforms/cell/spufs/switch.c
+++ b/arch/powerpc/platforms/cell/spufs/switch.c
@@ -574,7 +574,7 @@ static inline void save_mfc_rag(struct spu_state *csa, 
struct spu *spu)
 {
/* Save, Step 38:
 * Save RA_GROUP_ID register and the
-* RA_ENABLE reigster in the CSA.
+* RA_ENABLE register in the CSA.
 */
csa->priv1.resource_allocation_groupID_RW =
spu_resource_allocation_groupID_get(spu);
@@ -1227,7 +1227,7 @@ static inline void restore_mfc_rag(struct spu_state *csa, 
struct spu *spu)
 {
/* Restore, Step 29:
 * Restore RA_GROUP_ID register and the
-* RA_ENABLE reigster from the CSA.
+* RA_ENABLE register from the CSA.
 */
spu_resource_allocation_groupID_set(spu,
csa->priv1.resource_allocation_groupID_RW);
diff --git a/drivers/extcon/extcon-rt8973a.c b/drivers/extcon/extcon-rt8973a.c
index 40c07f4d656e..e75c03792398 100644
--- a/drivers/extcon/extcon-rt8973a.c
+++ b/drivers/extcon/extcon-rt8973a.c
@@ -270,7 +270,7 @@ static int rt8973a_muic_get_cable_type(struct 
rt8973a_muic_info *info)
}
cable_type = adc & RT8973A_REG_ADC_MASK;
 
-   /* Read Device 1 reigster to identify correct cable type */
+   /* Read Device 1 register to identify correct cable type */
ret = regmap_read(info->regmap, RT8973A_REG_DEV1, );
if (ret) {
dev_err(info->dev, "failed to read DEV1 register\n");
diff --git a/drivers/gpu/drm/arm/malidp_regs.h 
b/drivers/gpu/drm/arm/malidp_regs.h
index 993031542fa1..9b4f95d8ccec 100644
--- a/drivers/gpu/drm/arm/malidp_regs.h
+++ b/drivers/gpu/drm/arm/malidp_regs.h
@@ -145,7 +145,7 @@
 #define MALIDP_SE_COEFFTAB_DATA_MASK   0x3fff
 #define MALIDP_SE_SET_COEFFTAB_DATA(x) \
((x) & MALIDP_SE_COEFFTAB_DATA_MASK)
-/* Enhance coeffents reigster offset */
+/* Enhance coefficients register offset */
 #define MALIDP_SE_IMAGE_ENH0x3C
 /* ENH_LIMITS offset 0x0 */
 #define MALIDP_SE_ENH_LOW_LEVEL24
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h 
b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h
index 99c6f7eefd85..d03c8f12a15c 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h
@@ -58,7 +58,7 @@ struct fw_priv {
/* 0x81: PCI-AP, 01:PCIe, 02: 92S-U,
 * 0x82: USB-AP, 0x12: 72S-U, 03:SDIO */
u8 hci_sel;
-   /* the same value as reigster value  */
+   /* the same value as register value  */
u8 chip_version;
/* customer  ID low byte */
u8 customer_id_0;
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index 28ecaa7fc715..42b125602d72 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -6551,7 +6551,7 @@ lpfc_sli4_unregister_fcf(struct lpfc_hba *phba)
  * lpfc_unregister_fcf_rescan - Unregister currently registered fcf and rescan
  * @phba: Pointer to hba context object.
  *
- * This function unregisters the currently reigstered FCF. This function
+ * This function unregisters the currently registered FCF. This function
  * also tries to find another FCF for discovery by rescan the HBA FCF table.
  */
 void
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index ccbdbd62f0d8..612dc1240f90 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -267,7 +267,7 @@ static inline bool userfaultfd_huge_must_wait(struct 
userfaultfd_ctx *ctx,
 #endif /* CONFIG_HUGETLB_PAGE 

Re: [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c

2019-07-29 Thread Jason Yan




On 2019/7/29 22:31, Christoph Hellwig wrote:

I think you need to keep the more restrictive EXPORT_SYMBOL_GPL from
the 64-bit code to keep the intention of all authors intact.



Oh yes, I will fix in v2. Thanks.


.





Re: [PATCH] powerpc/kvm: Fall through switch case explicitly

2019-07-29 Thread Gustavo A. R. Silva



On 7/29/19 3:16 AM, Stephen Rothwell wrote:
> Hi Santosh,
> 
> On Mon, 29 Jul 2019 11:25:36 +0530 Santosh Sivaraj  wrote:
>>
>> Implicit fallthrough warning was enabled globally which broke
>> the build. Make it explicit with a `fall through` comment.
>>
>> Signed-off-by: Santosh Sivaraj 

Reviewed-by: Gustavo A. R. Silva 

Thanks!
--
Gustavo

>> ---
>>  arch/powerpc/kvm/book3s_32_mmu.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/powerpc/kvm/book3s_32_mmu.c 
>> b/arch/powerpc/kvm/book3s_32_mmu.c
>> index 653936177857..18f244aad7aa 100644
>> --- a/arch/powerpc/kvm/book3s_32_mmu.c
>> +++ b/arch/powerpc/kvm/book3s_32_mmu.c
>> @@ -239,6 +239,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct 
>> kvm_vcpu *vcpu, gva_t eaddr,
>>  case 2:
>>  case 6:
>>  pte->may_write = true;
>> +/* fall through */
>>  case 3:
>>  case 5:
>>  case 7:
>> -- 
>> 2.20.1
>>
> 
> Thanks
> 
> Reviewed-by: Stephen Rothwell 
> 
> This only shows up as a warning in a powerpc allyesconfig build.
> 


[Bug 204371] BUG kmalloc-4k (Tainted: G W ): Object padding overwritten

2019-07-29 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=204371

Erhard F. (erhar...@mailbox.org) changed:

           What    |Removed |Added
------------------------------------------------------------------
                CC |        |linuxppc-dev@lists.ozlabs.org


[Bug 204375] kernel 5.2.4 w. KASAN enabled fails to boot on a PowerMac G4 3,6 at very early stage

2019-07-29 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=204375

--- Comment #1 from Erhard F. (erhar...@mailbox.org) ---
Created attachment 284039
  --> https://bugzilla.kernel.org/attachment.cgi?id=284039&action=edit
kernel .config (PowerMac G4 DP, kernel 5.2.4)

With this .config the G4 DP boots fine.
With this .config + KASAN enabled the G4 DP fails to boot.


[Bug 204375] New: kernel 5.2.4 w. KASAN enabled fails to boot on a PowerMac G4 3,6 at very early stage

2019-07-29 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=204375

Bug ID: 204375
   Summary: kernel 5.2.4 w. KASAN enabled fails to boot on a
PowerMac G4 3,6 at very early stage
   Product: Platform Specific/Hardware
   Version: 2.5
Kernel Version: 5.2.4
  Hardware: PPC-32
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: PPC-32
  Assignee: platform_ppc...@kernel-bugs.osdl.org
  Reporter: erhar...@mailbox.org
Regression: No

I wanted to give KASAN a try to uncover even more bugs, but as it turns out a
KASAN-enabled kernel does not boot at all on my PowerMac G4 MDD. ;)

Tried kernels 5.2.4 and 5.3-rc2, both without success. I can't give a dmesg as
the boot process stops/stalls very early, only showing two lines on an
OpenFirmware console (white background, black letters):

done
found display   : /pco@f000/ATY,AlteracParent@10/ATY,Alterac_B@1,
opening...


Re: [PATCH] powerpc: workaround clang codegen bug in dcbz

2019-07-29 Thread Segher Boessenkool
On Mon, Jul 29, 2019 at 01:32:46PM -0700, Nathan Chancellor wrote:
> For the record:
> 
> https://godbolt.org/z/z57VU7
> 
> This seems consistent with what Michael found so I don't think a revert
> is entirely unreasonable.

Try this:

  https://godbolt.org/z/6_ZfVi

This matters in non-trivial loops, for example.  But all current cases
where such non-trivial loops are done with cache block instructions are
actually written in real assembler already, using two registers.
Because performance matters.  Not that I recommend writing code as
critical as memset in C with inline asm :-)


Segher
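
To make the point above concrete, a sketch (not kernel code) of such a loop;
with the "Z" constraint the compiler is free to keep the base address in one
register and bump only an index register, emitting the indexed "dcbz rA,rB"
form on each iteration:

    static inline void dcbz(void *addr)
    {
            __asm__ __volatile__ ("dcbz %y0"
                    : : "Z" (*(unsigned char *)addr) : "memory");
    }

    void zero_cachelines(void *p, unsigned long nblocks)
    {
            unsigned long i;

            /* assumes 128-byte cache blocks, as on most 64-bit parts */
            for (i = 0; i < nblocks; i++)
                    dcbz(p + i * 128);
    }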


Re: [PATCH] powerpc: workaround clang codegen bug in dcbz

2019-07-29 Thread Nick Desaulniers
On Mon, Jul 29, 2019 at 1:47 PM Nathan Chancellor
 wrote:
>
> On Mon, Jul 29, 2019 at 01:45:35PM -0700, Nick Desaulniers wrote:
> > On Mon, Jul 29, 2019 at 1:32 PM Nathan Chancellor
> >  wrote:
> > >
> > > On Mon, Jul 29, 2019 at 01:25:41PM -0700, Nick Desaulniers wrote:
> > > > But I'm not sure how the inlined code generated would be affected.
> > >
> > > For the record:
> > >
> > > https://godbolt.org/z/z57VU7
> > >
> > > This seems consistent with what Michael found so I don't think a revert
> > > is entirely unreasonable.
> >
> > Thanks for debugging/reporting/testing and the Godbolt link which
> > clearly shows that the codegen for out of line versions is no
> > different.  The case I can't comment on is what happens when those
> > `static inline` functions get inlined (maybe the original patch
> > improves those cases?).
> > --
> > Thanks,
> > ~Nick Desaulniers
>
> I'll try to build with various versions of GCC and compare the
> disassembly of the one problematic location that I found and see
> what it looks like.

Also, guess I should have included the tag:
Fixes: 6c5875843b87 ("powerpc: slightly improve cache helpers")
-- 
Thanks,
~Nick Desaulniers


Re: [PATCH] powerpc: workaround clang codegen bug in dcbz

2019-07-29 Thread Nathan Chancellor
On Mon, Jul 29, 2019 at 01:45:35PM -0700, Nick Desaulniers wrote:
> On Mon, Jul 29, 2019 at 1:32 PM Nathan Chancellor
>  wrote:
> >
> > On Mon, Jul 29, 2019 at 01:25:41PM -0700, Nick Desaulniers wrote:
> > > But I'm not sure how the inlined code generated would be affected.
> >
> > For the record:
> >
> > https://godbolt.org/z/z57VU7
> >
> > This seems consistent with what Michael found so I don't think a revert
> > is entirely unreasonable.
> 
> Thanks for debugging/reporting/testing and the Godbolt link which
> clearly shows that the codegen for out of line versions is no
> different.  The case I can't comment on is what happens when those
> `static inline` functions get inlined (maybe the original patch
> improves those cases?).
> -- 
> Thanks,
> ~Nick Desaulniers

I'll try to build with various versions of GCC and compare the
disassembly of the one problematic location that I found and see
what it looks like.

Cheers,
Nathan


Re: [PATCH] powerpc: workaround clang codegen bug in dcbz

2019-07-29 Thread Nathan Chancellor
On Mon, Jul 29, 2019 at 01:25:41PM -0700, Nick Desaulniers wrote:
> Commit 6c5875843b87 ("powerpc: slightly improve cache helpers") exposed
> what looks like a codegen bug in Clang's handling of `%y` output
> template with `Z` constraint. This is resulting in panics during boot
> for 32b powerpc builds w/ Clang, as reported by our CI.
> 
> Add back the original code that worked behind a preprocessor check for
> __clang__ until we can fix LLVM.
> 
> Further, it seems that clang allnoconfig builds are unhappy with `Z`, as
> reported by 0day bot. This is likely because Clang warns about inline
> asm constraints when the constraint requires inlining to be semantically
> valid.
> 
> Link: https://bugs.llvm.org/show_bug.cgi?id=42762
> Link: https://github.com/ClangBuiltLinux/linux/issues/593
> Link: 
> https://lore.kernel.org/lkml/20190721075846.GA97701@archlinux-threadripper/
> Debugged-by: Nathan Chancellor 
> Reported-by: Nathan Chancellor 
> Reported-by: kbuild test robot 
> Suggested-by: Nathan Chancellor 
> Signed-off-by: Nick Desaulniers 
> ---
> Alternatively, we could just revert 6c5875843b87. It seems that GCC
> generates the same code for these functions for out of line versions.
> But I'm not sure how the inlined code generated would be affected.

For the record:

https://godbolt.org/z/z57VU7

This seems consistent with what Michael found so I don't think a revert
is entirely unreasonable.

Either way:

Reviewed-by: Nathan Chancellor 


Re: [PATCH] scsi: ibmvfc: Mark expected switch fall-throughs

2019-07-29 Thread Kees Cook
On Sun, Jul 28, 2019 at 07:26:08PM -0500, Gustavo A. R. Silva wrote:
> Mark switch cases where we are expecting to fall through.
> 
> This patch fixes the following warnings:
> 
> drivers/scsi/ibmvscsi/ibmvfc.c: In function 'ibmvfc_npiv_login_done':
> drivers/scsi/ibmvscsi/ibmvfc.c:4022:3: warning: this statement may fall 
> through [-Wimplicit-fallthrough=]
>ibmvfc_retry_host_init(vhost);
>^
> drivers/scsi/ibmvscsi/ibmvfc.c:4023:2: note: here
>   case IBMVFC_MAD_DRIVER_FAILED:
>   ^~~~
> drivers/scsi/ibmvscsi/ibmvfc.c: In function 'ibmvfc_bsg_request':
> drivers/scsi/ibmvscsi/ibmvfc.c:1830:11: warning: this statement may fall 
> through [-Wimplicit-fallthrough=]
>port_id = (bsg_request->rqst_data.h_els.port_id[0] << 16) |
>^~~
> (bsg_request->rqst_data.h_els.port_id[1] << 8) |
> 
> bsg_request->rqst_data.h_els.port_id[2];
> ~~~
> drivers/scsi/ibmvscsi/ibmvfc.c:1833:2: note: here
>   case FC_BSG_RPT_ELS:
>   ^~~~
> drivers/scsi/ibmvscsi/ibmvfc.c:1838:11: warning: this statement may fall 
> through [-Wimplicit-fallthrough=]
>port_id = (bsg_request->rqst_data.h_ct.port_id[0] << 16) |
>^~
> (bsg_request->rqst_data.h_ct.port_id[1] << 8) |
> ~~~
> bsg_request->rqst_data.h_ct.port_id[2];
> ~~
> drivers/scsi/ibmvscsi/ibmvfc.c:1841:2: note: here
>   case FC_BSG_RPT_CT:
>   ^~~~
> 
> Reported-by: Stephen Rothwell 
> Signed-off-by: Gustavo A. R. Silva 

Reviewed-by: Kees Cook 

-Kees

> ---
>  drivers/scsi/ibmvscsi/ibmvfc.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> index 8cdbac076a1b..df897df5cafe 100644
> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> @@ -1830,6 +1830,7 @@ static int ibmvfc_bsg_request(struct bsg_job *job)
>   port_id = (bsg_request->rqst_data.h_els.port_id[0] << 16) |
>   (bsg_request->rqst_data.h_els.port_id[1] << 8) |
>   bsg_request->rqst_data.h_els.port_id[2];
> + /* fall through */
>   case FC_BSG_RPT_ELS:
>   fc_flags = IBMVFC_FC_ELS;
>   break;
> @@ -1838,6 +1839,7 @@ static int ibmvfc_bsg_request(struct bsg_job *job)
>   port_id = (bsg_request->rqst_data.h_ct.port_id[0] << 16) |
>   (bsg_request->rqst_data.h_ct.port_id[1] << 8) |
>   bsg_request->rqst_data.h_ct.port_id[2];
> + /* fall through */
>   case FC_BSG_RPT_CT:
>   fc_flags = IBMVFC_FC_CT_IU;
>   break;
> @@ -4020,6 +4022,7 @@ static void ibmvfc_npiv_login_done(struct ibmvfc_event 
> *evt)
>   return;
>   case IBMVFC_MAD_CRQ_ERROR:
>   ibmvfc_retry_host_init(vhost);
> + /* fall through */
>   case IBMVFC_MAD_DRIVER_FAILED:
>   ibmvfc_free_event(evt);
>   return;
> -- 
> 2.22.0
> 

-- 
Kees Cook


Re: [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base

2019-07-29 Thread Christoph Hellwig
On Wed, Jul 17, 2019 at 04:06:14PM +0800, Jason Yan wrote:
> Now the kernel base is a fixed value - KERNELBASE. To support KASLR, we
> need a variable to store the kernel base.

This should probably merged into the patch actually using it.


Re: [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c

2019-07-29 Thread Christoph Hellwig
I think you need to keep the more restrictive EXPORT_SYMBOL_GPL from
the 64-bit code to keep the intention of all authors intact.


Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32

2019-07-29 Thread Diana Madalina Craciun
Reviewed-by: Diana Craciun 
Tested-by: Diana Craciun 


On 7/17/2019 10:49 AM, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> Since CONFIG_RELOCATABLE is already supported, what we need to do is
> map or copy kernel to a proper place and relocate. Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> entries are not suitable to map the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
>
> Entropy is derived from the banner and timer base, which will change every
> build and boot. This is not very safe, so additionally the bootloader may
> pass entropy via the /chosen/kaslr-seed node in the device tree.
>
> We will use the first 512M of the low memory to randomize the kernel
> image. The memory will be split into 64M zones. We will use the lower 8
> bits of the entropy to decide the index of the 64M zone. Then we choose a
> 16K-aligned offset inside the 64M zone to put the kernel in.
>
> KERNELBASE
>
>     |-->   64M   <--|
>     |               |
>     +---------------+    +----------------+---------------+
>     |               |....|    |kernel|    |               |
>     +---------------+    +----------------+---------------+
>     |                         |
>     |----->   offset    <-----|
>
>                           kimage_vaddr
>
> We also check if we will overlap with some areas like the dtb area, the
> initrd area, or the crashkernel area. If we cannot find a proper area,
> kaslr will be disabled and we will boot from the original kernel.
>
> Jason Yan (10):
>   powerpc: unify definition of M_IF_NEEDED
>   powerpc: move memstart_addr and kernstart_addr to init-common.c
>   powerpc: introduce kimage_vaddr to store the kernel base
>   powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>   powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>   powerpc/fsl_booke/32: implement KASLR infrastructure
>   powerpc/fsl_booke/32: randomize the kernel image offset
>   powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>   powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>   powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>
>  arch/powerpc/Kconfig  |  11 +
>  arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>  arch/powerpc/include/asm/page.h   |   7 +
>  arch/powerpc/kernel/Makefile  |   1 +
>  arch/powerpc/kernel/early_32.c|   2 +-
>  arch/powerpc/kernel/exceptions-64e.S  |  10 -
>  arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
>  arch/powerpc/kernel/head_fsl_booke.S  |  61 ++-
>  arch/powerpc/kernel/kaslr_booke.c | 439 ++
>  arch/powerpc/kernel/machine_kexec.c   |   1 +
>  arch/powerpc/kernel/misc_64.S |   5 -
>  arch/powerpc/kernel/setup-common.c|  23 +
>  arch/powerpc/mm/init-common.c |   7 +
>  arch/powerpc/mm/init_32.c |   5 -
>  arch/powerpc/mm/init_64.c |   5 -
>  arch/powerpc/mm/mmu_decl.h|  10 +
>  arch/powerpc/mm/nohash/fsl_booke.c|   8 +-
>  17 files changed, 580 insertions(+), 48 deletions(-)
>  create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>
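
For illustration, the zone/offset selection described in the cover letter
amounts to something like this hypothetical helper (the real logic lives in
arch/powerpc/kernel/kaslr_booke.c and additionally checks for overlaps):

    #define SZ_16K  0x4000UL
    #define SZ_64M  0x4000000UL
    #define SZ_512M 0x20000000UL

    /* Sketch only: pick one of the eight 64M zones from the low bits of
     * the seed, then a 16K-aligned offset within that zone. */
    static unsigned long pick_kaslr_offset(unsigned long seed)
    {
            unsigned long zone = (seed & 0xff) % (SZ_512M / SZ_64M);
            unsigned long off  = (seed >> 8) % SZ_64M;

            return zone * SZ_64M + (off & ~(SZ_16K - 1));
    }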



[PATCH 18/18] powerpc/64s/exception: program check handler do not branch into a macro

2019-07-29 Thread Nicholas Piggin
It's a bit too clever to jump to a label inside an expanded macro,
particularly when the label is just a number rather than a descriptive
name.

So expand the interrupt handler code twice, for the stack and no-stack
cases, and branch to those. The slight code-size increase is worth the
improved clarity of the branches in this non-performance-critical code.
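
For illustration, the pattern being removed looks like this contrived gas
sketch (a hypothetical macro, not the actual kernel code):

    .macro SAVE_SOME
            std     r9,8(r1)
    3:      std     r12,16(r1)      /* '3:' is internal to the macro */
    .endm

            b       3f              /* fragile: relies on SAVE_SOME emitting '3:' */
            /* ... */
            SAVE_SOME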

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 0ee8c4a744c9..69f71c8759c5 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -533,11 +533,10 @@ 
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
andi.   r10,r12,MSR_PR  /* See if coming from user  */
mr  r10,r1  /* Save r1  */
subir1,r1,INT_FRAME_SIZE/* alloc frame on kernel stack  */
-   beq-1f
+   beq-100f
ld  r1,PACAKSAVE(r13)   /* kernel stack to use  */
-1: tdgei   r1,-INT_FRAME_SIZE  /* trap if r1 is in userspace   */
-   EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,0
-3:
+100:   tdgei   r1,-INT_FRAME_SIZE  /* trap if r1 is in userspace   */
+   EMIT_BUG_ENTRY 100b,__FILE__,__LINE__,0
.endif
 
std r9,_CCR(r1) /* save CR in stackframe*/
@@ -551,10 +550,10 @@ 
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
.if \kaup
kuap_save_amr_and_lock r9, r10, cr1, cr0
.endif
-   beq 4f  /* if from kernel mode  */
+   beq 101f/* if from kernel mode  */
ACCOUNT_CPU_USER_ENTRY(r13, r9, r10)
SAVE_PPR(\area, r9)
-4:
+101:
.else
.if \kaup
kuap_save_amr_and_lock r9, r10, cr1
@@ -1325,9 +1324,11 @@ EXC_COMMON_BEGIN(program_check_common)
mr  r10,r1  /* Save r1  */
ld  r1,PACAEMERGSP(r13) /* Use emergency stack  */
subir1,r1,INT_FRAME_SIZE/* alloc stack frame*/
-   b 3f/* Jump into the macro !!   */
+   INT_COMMON 0x700, PACA_EXGEN, 0, 1, 1, 0, 0
+   b 3f
 2:
INT_COMMON 0x700, PACA_EXGEN, 1, 1, 1, 0, 0
+3:
bl  save_nvgprs
addir3,r1,STACK_FRAME_OVERHEAD
bl  program_check_exception
-- 
2.22.0



[PATCH 17/18] powerpc/64s/exception: move interrupt entry code above the common handler

2019-07-29 Thread Nicholas Piggin
This better reflects the order in which the code is executed.

No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 450 +--
 1 file changed, 225 insertions(+), 225 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 456926439e41..0ee8c4a744c9 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -180,101 +180,6 @@ BEGIN_FTR_SECTION_NESTED(943) 
\
std ra,offset(r13); \
 END_FTR_SECTION_NESTED(ftr,ftr,943)
 
-.macro INT_SAVE_SRR_AND_JUMP label, hsrr, set_ri
-   ld  r10,PACAKMSR(r13)   /* get MSR value for kernel */
-   .if ! \set_ri
-   xorir10,r10,MSR_RI  /* Clear MSR_RI */
-   .endif
-   .if \hsrr == EXC_HV_OR_STD
-   BEGIN_FTR_SECTION
-   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
-   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
-   mtspr   SPRN_HSRR1,r10
-   FTR_SECTION_ELSE
-   mfspr   r11,SPRN_SRR0   /* save SRR0 */
-   mfspr   r12,SPRN_SRR1   /* and SRR1 */
-   mtspr   SPRN_SRR1,r10
-   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-   .elseif \hsrr
-   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
-   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
-   mtspr   SPRN_HSRR1,r10
-   .else
-   mfspr   r11,SPRN_SRR0   /* save SRR0 */
-   mfspr   r12,SPRN_SRR1   /* and SRR1 */
-   mtspr   SPRN_SRR1,r10
-   .endif
-   LOAD_HANDLER(r10, \label\())
-   .if \hsrr == EXC_HV_OR_STD
-   BEGIN_FTR_SECTION
-   mtspr   SPRN_HSRR0,r10
-   HRFI_TO_KERNEL
-   FTR_SECTION_ELSE
-   mtspr   SPRN_SRR0,r10
-   RFI_TO_KERNEL
-   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-   .elseif \hsrr
-   mtspr   SPRN_HSRR0,r10
-   HRFI_TO_KERNEL
-   .else
-   mtspr   SPRN_SRR0,r10
-   RFI_TO_KERNEL
-   .endif
-   b   .   /* prevent speculative execution */
-.endm
-
-/* INT_SAVE_SRR_AND_JUMP works for real or virt, this is faster but virt only 
*/
-.macro INT_VIRT_SAVE_SRR_AND_JUMP label, hsrr
-#ifdef CONFIG_RELOCATABLE
-   .if \hsrr == EXC_HV_OR_STD
-   BEGIN_FTR_SECTION
-   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
-   FTR_SECTION_ELSE
-   mfspr   r11,SPRN_SRR0   /* save SRR0 */
-   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-   .elseif \hsrr
-   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
-   .else
-   mfspr   r11,SPRN_SRR0   /* save SRR0 */
-   .endif
-   LOAD_HANDLER(r12, \label\())
-   mtctr   r12
-   .if \hsrr == EXC_HV_OR_STD
-   BEGIN_FTR_SECTION
-   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
-   FTR_SECTION_ELSE
-   mfspr   r12,SPRN_SRR1   /* and HSRR1 */
-   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-   .elseif \hsrr
-   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
-   .else
-   mfspr   r12,SPRN_SRR1   /* and HSRR1 */
-   .endif
-   li  r10,MSR_RI
-   mtmsrd  r10,1   /* Set RI (EE=0) */
-   bctr
-#else
-   .if \hsrr == EXC_HV_OR_STD
-   BEGIN_FTR_SECTION
-   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
-   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
-   FTR_SECTION_ELSE
-   mfspr   r11,SPRN_SRR0   /* save SRR0 */
-   mfspr   r12,SPRN_SRR1   /* and SRR1 */
-   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-   .elseif \hsrr
-   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
-   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
-   .else
-   mfspr   r11,SPRN_SRR0   /* save SRR0 */
-   mfspr   r12,SPRN_SRR1   /* and SRR1 */
-   .endif
-   li  r10,MSR_RI
-   mtmsrd  r10,1   /* Set RI (EE=0) */
-   b   \label
-#endif
-.endm
-
 /*
  * Branch to label using its 0xC000 address. This results in instruction
  * address suitable for MSR[IR]=0 or 1, which allows relocation to be turned
@@ -288,6 +193,15 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
mtctr   reg;\
bctr
 
+.macro INT_KVM_HANDLER vec, hsrr, area, skip
+   .if \hsrr
+   TRAMP_KVM_BEGIN(do_kvm_H\vec\())
+   .else
+   TRAMP_KVM_BEGIN(do_kvm_\vec\())
+   .endif
+   KVM_HANDLER \vec, \hsrr, \area, \skip
+.endm
+
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 /*
@@ -390,6 +304,222 @@ 
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
 .endm
 #endif
 
+.macro INT_SAVE_SRR_AND_JUMP label, hsrr, set_ri
+   ld  r10,PACAKMSR(r13)   /* get MSR value for kernel */
+   .if ! \set_ri
+   xori

[PATCH 16/18] powerpc/64s/exception: INT_COMMON add DIR, DSISR, reconcile options

2019-07-29 Thread Nicholas Piggin
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 111 ---
 1 file changed, 51 insertions(+), 60 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index dcb60f082fdc..456926439e41 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -398,7 +398,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
  * If stack=0, then the stack is already set in r1, and r1 is saved in r10.
  * PPR save and CPU accounting is not done for the !stack case (XXX why not?)
  */
-.macro INT_COMMON vec, area, stack, kaup
+.macro INT_COMMON vec, area, stack, kaup, reconcile, dar, dsisr
.if \stack
andi.   r10,r12,MSR_PR  /* See if coming from user  */
mr  r10,r1  /* Save r1  */
@@ -442,6 +442,24 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
std r9,GPR11(r1)
std r10,GPR12(r1)
std r11,GPR13(r1)
+   .if \dar
+   .if \dar == 2
+   ld  r10,_NIP(r1)
+   .else
+   ld  r10,\area+EX_DAR(r13)
+   .endif
+   std r10,_DAR(r1)
+   .endif
+   .if \dsisr
+   .if \dsisr == 2
+   ld  r10,_MSR(r1)
+   lis r11,DSISR_SRR1_MATCH_64S@h
+   and r10,r10,r11
+   .else
+   lwz r10,\area+EX_DSISR(r13)
+   .endif
+   std r10,_DSISR(r1)
+   .endif
 BEGIN_FTR_SECTION_NESTED(66)
ld  r10,\area+EX_CFAR(r13)
std r10,ORIG_GPR3(r1)
@@ -468,6 +486,10 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66)
.if \stack
ACCOUNT_STOLEN_TIME
.endif
+
+   .if \reconcile
+   RECONCILE_IRQ_STATE(r10, r11)
+   .endif
 .endm
 
 /*
@@ -665,9 +687,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
 
 #define EXC_COMMON(name, realvec, hdlr)
\
EXC_COMMON_BEGIN(name); \
-   INT_COMMON realvec, PACA_EXGEN, 1, 1 ;  \
+   INT_COMMON realvec, PACA_EXGEN, 1, 1, 1, 0, 0 ; \
bl  save_nvgprs;\
-   RECONCILE_IRQ_STATE(r10, r11);  \
addir3,r1,STACK_FRAME_OVERHEAD; \
bl  hdlr;   \
b   ret_from_except
@@ -678,9 +699,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
  */
 #define EXC_COMMON_ASYNC(name, realvec, hdlr)  \
EXC_COMMON_BEGIN(name); \
-   INT_COMMON realvec, PACA_EXGEN, 1, 1 ;  \
+   INT_COMMON realvec, PACA_EXGEN, 1, 1, 1, 0, 0 ; \
FINISH_NAP; \
-   RECONCILE_IRQ_STATE(r10, r11);  \
RUNLATCH_ON;\
addir3,r1,STACK_FRAME_OVERHEAD; \
bl  hdlr;   \
@@ -859,7 +879,7 @@ EXC_COMMON_BEGIN(system_reset_common)
mr  r10,r1
ld  r1,PACA_NMI_EMERG_SP(r13)
subir1,r1,INT_FRAME_SIZE
-   INT_COMMON 0x100, PACA_EXNMI, 0, 1
+   INT_COMMON 0x100, PACA_EXNMI, 0, 1, 0, 0, 0
bl  save_nvgprs
/*
 * Set IRQS_ALL_DISABLED unconditionally so arch_irqs_disabled does
@@ -970,12 +990,7 @@ EXC_COMMON_BEGIN(machine_check_early_common)
subir1,r1,INT_FRAME_SIZE/* alloc stack frame */
 
/* We don't touch AMR here, we never go to virtual mode */
-   INT_COMMON 0x200, PACA_EXMC, 0, 0
-
-   ld  r3,PACA_EXMC+EX_DAR(r13)
-   lwz r4,PACA_EXMC+EX_DSISR(r13)
-   std r3,_DAR(r1)
-   std r4,_DSISR(r1)
+   INT_COMMON 0x200, PACA_EXMC, 0, 0, 0, 1, 1
 
 BEGIN_FTR_SECTION
bl  enable_machine_check
@@ -1070,16 +1085,11 @@ EXC_COMMON_BEGIN(machine_check_common)
 * Machine check is different because we use a different
 * save area: PACA_EXMC instead of PACA_EXGEN.
 */
-   INT_COMMON 0x200, PACA_EXMC, 1, 1
+   INT_COMMON 0x200, PACA_EXMC, 1, 1, 1, 1, 1
FINISH_NAP
-   RECONCILE_IRQ_STATE(r10, r11)
-   ld  r3,PACA_EXMC+EX_DAR(r13)
-   lwz r4,PACA_EXMC+EX_DSISR(r13)
/* Enable MSR_RI when finished with PACA_EXMC */
li  r10,MSR_RI
mtmsrd  r10,1
-   std r3,_DAR(r1)
-   std r4,_DSISR(r1)
bl  save_nvgprs
addir3,r1,STACK_FRAME_OVERHEAD
bl  machine_check_exception
@@ -1162,14 +1172,11 @@ EXC_COMMON_BEGIN(data_access_common)
 * r9 - r13 are saved in paca->exgen.
 * EX_DAR and EX_DSISR have saved DAR/DSISR
 */
-   

[PATCH 15/18] powerpc/64s/exception: Expand EXCEPTION_PROLOG_COMMON_1 and 2 into caller

2019-07-29 Thread Nicholas Piggin
No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 85 +---
 1 file changed, 40 insertions(+), 45 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index ff949d6139d3..dcb60f082fdc 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -390,49 +390,6 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
 .endm
 #endif
 
-#define EXCEPTION_PROLOG_COMMON_1()   \
-   std r9,_CCR(r1);/* save CR in stackframe*/ \
-   std r11,_NIP(r1);   /* save SRR0 in stackframe  */ \
-   std r12,_MSR(r1);   /* save SRR1 in stackframe  */ \
-   std r10,0(r1);  /* make stack chain pointer */ \
-   std r0,GPR0(r1);/* save r0 in stackframe*/ \
-   std r10,GPR1(r1);   /* save r1 in stackframe*/ \
-
-/* Save original regs values from save area to stack frame. */
-#define EXCEPTION_PROLOG_COMMON_2(area, trap) \
-   ld  r9,area+EX_R9(r13); /* move r9, r10 to stackframe   */ \
-   ld  r10,area+EX_R10(r13);  \
-   std r9,GPR9(r1);   \
-   std r10,GPR10(r1); \
-   ld  r9,area+EX_R11(r13);/* move r11 - r13 to stackframe */ \
-   ld  r10,area+EX_R12(r13);  \
-   ld  r11,area+EX_R13(r13);  \
-   std r9,GPR11(r1);  \
-   std r10,GPR12(r1); \
-   std r11,GPR13(r1); \
-BEGIN_FTR_SECTION_NESTED(66); \
-   ld  r10,area+EX_CFAR(r13); \
-   std r10,ORIG_GPR3(r1); \
-END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66);
   \
-   GET_CTR(r10, area);\
-   std r10,_CTR(r1);  \
-   std r2,GPR2(r1);/* save r2 in stackframe*/ \
-   SAVE_4GPRS(3, r1);  /* save r3 - r6 in stackframe   */ \
-   SAVE_2GPRS(7, r1);  /* save r7, r8 in stackframe*/ \
-   mflrr9; /* Get LR, later save to stack  */ \
-   ld  r2,PACATOC(r13);/* get kernel TOC into r2   */ \
-   std r9,_LINK(r1);  \
-   lbz r10,PACAIRQSOFTMASK(r13);  \
-   mfspr   r11,SPRN_XER;   /* save XER in stackframe   */ \
-   std r10,SOFTE(r1); \
-   std r11,_XER(r1);  \
-   li  r9,(trap)+1;   \
-   std r9,_TRAP(r1);   /* set trap number  */ \
-   li  r10,0; \
-   ld  r11,exception_marker@toc(r2);  \
-   std r10,RESULT(r1); /* clear regs->result   */ \
-   std r11,STACK_FRAME_OVERHEAD-16(r1); /* mark the frame  */
-
 /*
  * On entry r13 points to the paca, r9-r13 are saved in the paca,
  * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and
@@ -452,7 +409,13 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66);
   \
EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,0
 3:
.endif
-   EXCEPTION_PROLOG_COMMON_1()
+
+   std r9,_CCR(r1) /* save CR in stackframe*/
+   std r11,_NIP(r1)/* save SRR0 in stackframe  */
+   std r12,_MSR(r1)/* save SRR1 in stackframe  */
+   std r10,0(r1)   /* make stack chain pointer */
+   std r0,GPR0(r1) /* save r0 in stackframe*/
+   std r10,GPR1(r1)/* save r1 in stackframe*/
 
.if \stack
.if \kaup
@@ -468,7 +431,39 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66);
   \
.endif
.endif
 
-   EXCEPTION_PROLOG_COMMON_2(\area, \vec)
+   /* Save original regs values from save area to stack frame. */
+   ld  r9,\area+EX_R9(r13) /* move r9, r10 to stackframe   */
+   ld  r10,\area+EX_R10(r13)
+   std r9,GPR9(r1)
+   std r10,GPR10(r1)
+   ld  r9,\area+EX_R11(r13)/* move r11 

[PATCH 14/18] powerpc/64s/exception: Expand EXCEPTION_COMMON macro into caller

2019-07-29 Thread Nicholas Piggin
No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 54 ++--
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 3d5ded748de6..ff949d6139d3 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -437,41 +437,41 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66);   
   \
  * On entry r13 points to the paca, r9-r13 are saved in the paca,
  * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and
  * SRR1, and relocation is on.
+ *
+ * If stack=0, then the stack is already set in r1, and r1 is saved in r10.
+ * PPR save and CPU accounting is not done for the !stack case (XXX why not?)
  */
-#define EXCEPTION_COMMON(area, trap)  \
-   andi.   r10,r12,MSR_PR; /* See if coming from user  */ \
-   mr  r10,r1; /* Save r1  */ \
-   subir1,r1,INT_FRAME_SIZE;   /* alloc frame on kernel stack  */ \
-   beq-1f;\
-   ld  r1,PACAKSAVE(r13);  /* kernel stack to use  */ \
-1: tdgei   r1,-INT_FRAME_SIZE; /* trap if r1 is in userspace   */ \
-   EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,0; \
-3: EXCEPTION_PROLOG_COMMON_1();   \
-   kuap_save_amr_and_lock r9, r10, cr1, cr0;  \
-   beq 4f; /* if from kernel mode  */ \
-   ACCOUNT_CPU_USER_ENTRY(r13, r9, r10);  \
-   SAVE_PPR(area, r9);\
-4: EXCEPTION_PROLOG_COMMON_2(area, trap); \
-   ACCOUNT_STOLEN_TIME
-
-/*
- * Exception where stack is already set in r1, r1 is saved in r10.
- * PPR save and CPU accounting is not done (for some reason).
- */
-#define EXCEPTION_COMMON_STACK(area, trap) \
-   EXCEPTION_PROLOG_COMMON_1();\
-   kuap_save_amr_and_lock r9, r10, cr1;\
-   EXCEPTION_PROLOG_COMMON_2(area, trap)
-
 .macro INT_COMMON vec, area, stack, kaup
.if \stack
-   EXCEPTION_COMMON(\area, \vec)
-   .else
+   andi.   r10,r12,MSR_PR  /* See if coming from user  */
+   mr  r10,r1  /* Save r1  */
+   subir1,r1,INT_FRAME_SIZE/* alloc frame on kernel stack  */
+   beq-1f
+   ld  r1,PACAKSAVE(r13)   /* kernel stack to use  */
+1: tdgei   r1,-INT_FRAME_SIZE  /* trap if r1 is in userspace   */
+   EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,0
+3:
+   .endif
EXCEPTION_PROLOG_COMMON_1()
+
+   .if \stack
+   .if \kaup
+   kuap_save_amr_and_lock r9, r10, cr1, cr0
+   .endif
+   beq 4f  /* if from kernel mode  */
+   ACCOUNT_CPU_USER_ENTRY(r13, r9, r10)
+   SAVE_PPR(\area, r9)
+4:
+   .else
.if \kaup
kuap_save_amr_and_lock r9, r10, cr1
.endif
+   .endif
+
EXCEPTION_PROLOG_COMMON_2(\area, \vec)
+
+   .if \stack
+   ACCOUNT_STOLEN_TIME
.endif
 .endm
 
-- 
2.22.0



[PATCH 13/18] powerpc/64s/exception: Add INT_COMMON gas macro to generate common exception code

2019-07-29 Thread Nicholas Piggin
No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 52 +---
 1 file changed, 32 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 565b9c18aa0c..3d5ded748de6 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -463,6 +463,18 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66);
   \
kuap_save_amr_and_lock r9, r10, cr1;\
EXCEPTION_PROLOG_COMMON_2(area, trap)
 
+.macro INT_COMMON vec, area, stack, kaup
+   .if \stack
+   EXCEPTION_COMMON(\area, \vec)
+   .else
+   EXCEPTION_PROLOG_COMMON_1()
+   .if \kaup
+   kuap_save_amr_and_lock r9, r10, cr1
+   .endif
+   EXCEPTION_PROLOG_COMMON_2(\area, \vec)
+   .endif
+.endm
+
 /*
  * Restore all registers including H/SRR0/1 saved in a stack frame of a
  * standard exception.
@@ -658,7 +670,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
 
 #define EXC_COMMON(name, realvec, hdlr)
\
EXC_COMMON_BEGIN(name); \
-   EXCEPTION_COMMON(PACA_EXGEN, realvec);  \
+   INT_COMMON realvec, PACA_EXGEN, 1, 1 ;  \
bl  save_nvgprs;\
RECONCILE_IRQ_STATE(r10, r11);  \
addir3,r1,STACK_FRAME_OVERHEAD; \
@@ -671,7 +683,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
  */
 #define EXC_COMMON_ASYNC(name, realvec, hdlr)  \
EXC_COMMON_BEGIN(name); \
-   EXCEPTION_COMMON(PACA_EXGEN, realvec);  \
+   INT_COMMON realvec, PACA_EXGEN, 1, 1 ;  \
FINISH_NAP; \
RECONCILE_IRQ_STATE(r10, r11);  \
RUNLATCH_ON;\
@@ -852,7 +864,7 @@ EXC_COMMON_BEGIN(system_reset_common)
mr  r10,r1
ld  r1,PACA_NMI_EMERG_SP(r13)
subir1,r1,INT_FRAME_SIZE
-   EXCEPTION_COMMON_STACK(PACA_EXNMI, 0x100)
+   INT_COMMON 0x100, PACA_EXNMI, 0, 1
bl  save_nvgprs
/*
 * Set IRQS_ALL_DISABLED unconditionally so arch_irqs_disabled does
@@ -962,9 +974,8 @@ EXC_COMMON_BEGIN(machine_check_early_common)
bgt cr1,unrecoverable_mce   /* Check if we hit limit of 4 */
subir1,r1,INT_FRAME_SIZE/* alloc stack frame */
 
-   EXCEPTION_PROLOG_COMMON_1()
/* We don't touch AMR here, we never go to virtual mode */
-   EXCEPTION_PROLOG_COMMON_2(PACA_EXMC, 0x200)
+   INT_COMMON 0x200, PACA_EXMC, 0, 0
 
ld  r3,PACA_EXMC+EX_DAR(r13)
lwz r4,PACA_EXMC+EX_DSISR(r13)
@@ -1064,7 +1075,7 @@ EXC_COMMON_BEGIN(machine_check_common)
 * Machine check is different because we use a different
 * save area: PACA_EXMC instead of PACA_EXGEN.
 */
-   EXCEPTION_COMMON(PACA_EXMC, 0x200)
+   INT_COMMON 0x200, PACA_EXMC, 1, 1
FINISH_NAP
RECONCILE_IRQ_STATE(r10, r11)
ld  r3,PACA_EXMC+EX_DAR(r13)
@@ -1156,7 +1167,7 @@ EXC_COMMON_BEGIN(data_access_common)
 * r9 - r13 are saved in paca->exgen.
 * EX_DAR and EX_DSISR have saved DAR/DSISR
 */
-   EXCEPTION_COMMON(PACA_EXGEN, 0x300)
+   INT_COMMON 0x300, PACA_EXGEN, 1, 1
RECONCILE_IRQ_STATE(r10, r11)
ld  r12,_MSR(r1)
ld  r3,PACA_EXGEN+EX_DAR(r13)
@@ -1179,7 +1190,7 @@ EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
 EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
 INT_KVM_HANDLER 0x380, EXC_STD, PACA_EXSLB, 1
 EXC_COMMON_BEGIN(data_access_slb_common)
-   EXCEPTION_COMMON(PACA_EXSLB, 0x380)
+   INT_COMMON 0x380, PACA_EXSLB, 1, 1
ld  r4,PACA_EXSLB+EX_DAR(r13)
std r4,_DAR(r1)
addir3,r1,STACK_FRAME_OVERHEAD
@@ -1212,7 +1223,7 @@ EXC_VIRT_BEGIN(instruction_access, 0x4400, 0x80)
 EXC_VIRT_END(instruction_access, 0x4400, 0x80)
 INT_KVM_HANDLER 0x400, EXC_STD, PACA_EXGEN, 0
 EXC_COMMON_BEGIN(instruction_access_common)
-   EXCEPTION_COMMON(PACA_EXGEN, 0x400)
+   INT_COMMON 0x400, PACA_EXGEN, 1, 1
RECONCILE_IRQ_STATE(r10, r11)
ld  r12,_MSR(r1)
ld  r3,_NIP(r1)
@@ -1235,7 +1246,7 @@ EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
 EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
 INT_KVM_HANDLER 0x480, EXC_STD, PACA_EXSLB, 0
 EXC_COMMON_BEGIN(instruction_access_slb_common)
-   EXCEPTION_COMMON(PACA_EXSLB, 0x480)
+   INT_COMMON 0x480, PACA_EXSLB, 1, 1
ld  r4,_NIP(r1)
addir3,r1,STACK_FRAME_OVERHEAD
 

[PATCH 12/18] powerpc/64s/exception: Merge EXCEPTION_PROLOG_COMMON_2/3

2019-07-29 Thread Nicholas Piggin
Merge EXCEPTION_PROLOG_COMMON_3 into EXCEPTION_PROLOG_COMMON_2.

No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 18 ++
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index e46d27be06fe..565b9c18aa0c 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -399,7 +399,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
std r10,GPR1(r1);   /* save r1 in stackframe*/ \
 
 /* Save original regs values from save area to stack frame. */
-#define EXCEPTION_PROLOG_COMMON_2(area)
   \
+#define EXCEPTION_PROLOG_COMMON_2(area, trap) \
ld  r9,area+EX_R9(r13); /* move r9, r10 to stackframe   */ \
ld  r10,area+EX_R10(r13);  \
std r9,GPR9(r1);   \
@@ -415,9 +415,7 @@ BEGIN_FTR_SECTION_NESTED(66);   
   \
std r10,ORIG_GPR3(r1); \
 END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66);
   \
GET_CTR(r10, area);\
-   std r10,_CTR(r1);
-
-#define EXCEPTION_PROLOG_COMMON_3(trap)
   \
+   std r10,_CTR(r1);  \
std r2,GPR2(r1);/* save r2 in stackframe*/ \
SAVE_4GPRS(3, r1);  /* save r3 - r6 in stackframe   */ \
SAVE_2GPRS(7, r1);  /* save r7, r8 in stackframe*/ \
@@ -453,8 +451,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66); 
   \
beq 4f; /* if from kernel mode  */ \
ACCOUNT_CPU_USER_ENTRY(r13, r9, r10);  \
SAVE_PPR(area, r9);\
-4: EXCEPTION_PROLOG_COMMON_2(area);   \
-   EXCEPTION_PROLOG_COMMON_3(trap);   \
+4: EXCEPTION_PROLOG_COMMON_2(area, trap); \
ACCOUNT_STOLEN_TIME
 
 /*
@@ -464,8 +461,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66); 
   \
 #define EXCEPTION_COMMON_STACK(area, trap) \
EXCEPTION_PROLOG_COMMON_1();\
kuap_save_amr_and_lock r9, r10, cr1;\
-   EXCEPTION_PROLOG_COMMON_2(area);\
-   EXCEPTION_PROLOG_COMMON_3(trap)
+   EXCEPTION_PROLOG_COMMON_2(area, trap)
 
 /*
  * Restore all registers including H/SRR0/1 saved in a stack frame of a
@@ -968,8 +964,7 @@ EXC_COMMON_BEGIN(machine_check_early_common)
 
EXCEPTION_PROLOG_COMMON_1()
/* We don't touch AMR here, we never go to virtual mode */
-   EXCEPTION_PROLOG_COMMON_2(PACA_EXMC)
-   EXCEPTION_PROLOG_COMMON_3(0x200)
+   EXCEPTION_PROLOG_COMMON_2(PACA_EXMC, 0x200)
 
ld  r3,PACA_EXMC+EX_DAR(r13)
lwz r4,PACA_EXMC+EX_DSISR(r13)
@@ -1617,8 +1612,7 @@ EXC_COMMON_BEGIN(hmi_exception_early_common)
subir1,r1,INT_FRAME_SIZE/* alloc stack frame*/
EXCEPTION_PROLOG_COMMON_1()
/* We don't touch AMR here, we never go to virtual mode */
-   EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN)
-   EXCEPTION_PROLOG_COMMON_3(0xe60)
+   EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN, 0xe60)
addir3,r1,STACK_FRAME_OVERHEAD
bl  hmi_exception_realmode
cmpdi   cr0,r3,0
-- 
2.22.0



[PATCH 11/18] powerpc/64s/exception: KVM_HANDLER reorder arguments to match other macros

2019-07-29 Thread Nicholas Piggin
Also change argument name (n -> vec) to match others.

No generated code change.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 4b2d4c8f8831..e46d27be06fe 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -316,7 +316,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
.endif
 .endm
 
-.macro KVM_HANDLER area, hsrr, n, skip
+.macro KVM_HANDLER vec, hsrr, area, skip
.if \skip
cmpwi   r10,KVM_GUEST_MODE_SKIP
beq 89f
@@ -337,14 +337,14 @@ 
END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
/* HSRR variants have the 0x2 bit added to their trap number */
.if \hsrr == EXC_HV_OR_STD
BEGIN_FTR_SECTION
-   ori r12,r12,(\n + 0x2)
+   ori r12,r12,(\vec + 0x2)
FTR_SECTION_ELSE
-   ori r12,r12,(\n)
+   ori r12,r12,(\vec)
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
.elseif \hsrr
-   ori r12,r12,(\n + 0x2)
+   ori r12,r12,(\vec + 0x2)
.else
-   ori r12,r12,(\n)
+   ori r12,r12,(\vec)
.endif
 
 #ifdef CONFIG_RELOCATABLE
@@ -386,7 +386,7 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
 #else
 .macro KVMTEST hsrr, n
 .endm
-.macro KVM_HANDLER area, hsrr, n, skip
+.macro KVM_HANDLER vec, hsrr, area, skip
 .endm
 #endif
 
@@ -657,7 +657,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
.else
TRAMP_KVM_BEGIN(do_kvm_\vec\())
.endif
-   KVM_HANDLER \area, \hsrr, \vec, \skip
+   KVM_HANDLER \vec, \hsrr, \area, \skip
 .endm
 
 #define EXC_COMMON(name, realvec, hdlr)
\
@@ -1539,7 +1539,7 @@ TRAMP_KVM_BEGIN(do_kvm_0xc00)
SET_SCRATCH0(r10)
std r9,PACA_EXGEN+EX_R9(r13)
mfcrr9
-   KVM_HANDLER PACA_EXGEN, EXC_STD, 0xc00, 0
+   KVM_HANDLER 0xc00, EXC_STD, PACA_EXGEN, 0
 #endif
 
 
-- 
2.22.0



[PATCH 10/18] powerpc/64s/exception: Add INT_KVM_HANDLER gas macro

2019-07-29 Thread Nicholas Piggin
Replace the 4 variants of cpp macros with one gas macro.

No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 99 +++-
 1 file changed, 40 insertions(+), 59 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index e5122ace5f05..4b2d4c8f8831 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -651,22 +651,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
.endif
 .endm
 
-
-#define TRAMP_KVM(area, n) \
-   TRAMP_KVM_BEGIN(do_kvm_##n);\
-   KVM_HANDLER area, EXC_STD, n, 0
-
-#define TRAMP_KVM_SKIP(area, n)
\
-   TRAMP_KVM_BEGIN(do_kvm_##n);\
-   KVM_HANDLER area, EXC_STD, n, 1
-
-#define TRAMP_KVM_HV(area, n)  \
-   TRAMP_KVM_BEGIN(do_kvm_H##n);   \
-   KVM_HANDLER area, EXC_HV, n, 0
-
-#define TRAMP_KVM_HV_SKIP(area, n) \
-   TRAMP_KVM_BEGIN(do_kvm_H##n);   \
-   KVM_HANDLER area, EXC_HV, n, 1
+.macro INT_KVM_HANDLER vec, hsrr, area, skip
+   .if \hsrr
+   TRAMP_KVM_BEGIN(do_kvm_H\vec\())
+   .else
+   TRAMP_KVM_BEGIN(do_kvm_\vec\())
+   .endif
+   KVM_HANDLER \area, \hsrr, \vec, \skip
+.endm
 
 #define EXC_COMMON(name, realvec, hdlr)
\
EXC_COMMON_BEGIN(name); \
@@ -827,9 +819,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 * be dangerous anyway.
 */
 EXC_REAL_END(system_reset, 0x100, 0x100)
-
 EXC_VIRT_NONE(0x4100, 0x100)
-TRAMP_KVM(PACA_EXNMI, 0x100)
+INT_KVM_HANDLER 0x100, EXC_STD, PACA_EXNMI, 0
 
 #ifdef CONFIG_PPC_P7_NAP
 TRAMP_REAL_BEGIN(system_reset_idle_wake)
@@ -923,7 +914,7 @@ TRAMP_REAL_BEGIN(machine_check_fwnmi)
INT_HANDLER machine_check, 0x200, 0, 1, 0, EXC_STD, PACA_EXMC, 0, 1, 1, 
0, 0
 #endif
 
-TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
+INT_KVM_HANDLER 0x200, EXC_STD, PACA_EXMC, 1
 
 #define MACHINE_CHECK_HANDLER_WINDUP   \
/* Clear MSR_RI before setting SRR0 and SRR1. */\
@@ -1162,9 +1153,7 @@ EXC_REAL_END(data_access, 0x300, 0x80)
 EXC_VIRT_BEGIN(data_access, 0x4300, 0x80)
INT_HANDLER data_access, 0x300, 0, 0, 1, EXC_STD, PACA_EXGEN, 1, 1, 1, 
0, 0
 EXC_VIRT_END(data_access, 0x4300, 0x80)
-
-TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
-
+INT_KVM_HANDLER 0x300, EXC_STD, PACA_EXGEN, 1
 EXC_COMMON_BEGIN(data_access_common)
/*
 * Here r13 points to the paca, r9 contains the saved CR,
@@ -1193,9 +1182,7 @@ EXC_REAL_END(data_access_slb, 0x380, 0x80)
 EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
INT_HANDLER data_access_slb, 0x380, 0, 0, 1, 0, PACA_EXSLB, 1, 1, 0, 0, 0
 EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
-
-TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
-
+INT_KVM_HANDLER 0x380, EXC_STD, PACA_EXSLB, 1
 EXC_COMMON_BEGIN(data_access_slb_common)
EXCEPTION_COMMON(PACA_EXSLB, 0x380)
ld  r4,PACA_EXSLB+EX_DAR(r13)
@@ -1228,9 +1215,7 @@ EXC_REAL_END(instruction_access, 0x400, 0x80)
 EXC_VIRT_BEGIN(instruction_access, 0x4400, 0x80)
INT_HANDLER instruction_access, 0x400, 0, 0, 1, EXC_STD, PACA_EXGEN, 1, 
0, 0, 0, 0
 EXC_VIRT_END(instruction_access, 0x4400, 0x80)
-
-TRAMP_KVM(PACA_EXGEN, 0x400)
-
+INT_KVM_HANDLER 0x400, EXC_STD, PACA_EXGEN, 0
 EXC_COMMON_BEGIN(instruction_access_common)
EXCEPTION_COMMON(PACA_EXGEN, 0x400)
RECONCILE_IRQ_STATE(r10, r11)
@@ -1253,8 +1238,7 @@ EXC_REAL_END(instruction_access_slb, 0x480, 0x80)
 EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
INT_HANDLER instruction_access_slb, 0x480, 0, 0, 1, EXC_STD, 
PACA_EXSLB, 1, 0, 0, 0, 0
 EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
-TRAMP_KVM(PACA_EXSLB, 0x480)
-
+INT_KVM_HANDLER 0x480, EXC_STD, PACA_EXSLB, 0
 EXC_COMMON_BEGIN(instruction_access_slb_common)
EXCEPTION_COMMON(PACA_EXSLB, 0x480)
ld  r4,_NIP(r1)
@@ -1285,9 +1269,8 @@ EXC_REAL_END(hardware_interrupt, 0x500, 0x100)
 EXC_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x100)
INT_HANDLER hardware_interrupt, 0x500, 0, 0, 1, EXC_HV_OR_STD, 
PACA_EXGEN, 1, 0, 0, IRQS_DISABLED, 1
 EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100)
-
-TRAMP_KVM(PACA_EXGEN, 0x500)
-TRAMP_KVM_HV(PACA_EXGEN, 0x500)
+INT_KVM_HANDLER 0x500, EXC_STD, PACA_EXGEN, 0
+INT_KVM_HANDLER 0x500, EXC_HV, PACA_EXGEN, 0
 EXC_COMMON_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
 
 
@@ -1297,8 +1280,7 @@ EXC_REAL_END(alignment, 0x600, 0x100)
 EXC_VIRT_BEGIN(alignment, 0x4600, 0x100)
INT_HANDLER alignment, 0x600, 0, 0, 1, EXC_STD, PACA_EXGEN, 1, 1, 1, 0, 0
 EXC_VIRT_END(alignment, 0x4600, 0x100)
-

[PATCH 09/18] powerpc/64s/exception: INT_HANDLER support HDAR/HDSISR and use it in HDSI

2019-07-29 Thread Nicholas Piggin
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 1c07b5fc6692..e5122ace5f05 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -620,11 +620,19 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
GET_SCRATCH0(r10)
std r10,\area\()+EX_R13(r13)
.if \dar
+   .if \hsrr
+   mfspr   r10,SPRN_HDAR
+   .else
mfspr   r10,SPRN_DAR
+   .endif
std r10,\area\()+EX_DAR(r13)
.endif
.if \dsisr
+   .if \hsrr
+   mfspr   r10,SPRN_HDSISR
+   .else
mfspr   r10,SPRN_DSISR
+   .endif
stw r10,\area\()+EX_DSISR(r13)
.endif
 
@@ -1564,17 +1572,13 @@ EXC_COMMON(single_step_common, 0xd00, 
single_step_exception)
 
 
 EXC_REAL_BEGIN(h_data_storage, 0xe00, 0x20)
-   INT_HANDLER h_data_storage, 0xe00, 1, 0, 0, EXC_HV, PACA_EXGEN, 1, 0, 
0, 0, 1
+   INT_HANDLER h_data_storage, 0xe00, 1, 0, 0, EXC_HV, PACA_EXGEN, 1, 1, 
1, 0, 1
 EXC_REAL_END(h_data_storage, 0xe00, 0x20)
 EXC_VIRT_BEGIN(h_data_storage, 0x4e00, 0x20)
-   INT_HANDLER h_data_storage, 0xe00, 1, 0, 1, EXC_HV, PACA_EXGEN, 1, 0, 
0, 0, 1
+   INT_HANDLER h_data_storage, 0xe00, 1, 0, 1, EXC_HV, PACA_EXGEN, 1, 1, 
1, 0, 1
 EXC_VIRT_END(h_data_storage, 0x4e00, 0x20)
 TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
 EXC_COMMON_BEGIN(h_data_storage_common)
-   mfspr   r10,SPRN_HDAR
-   std r10,PACA_EXGEN+EX_DAR(r13)
-   mfspr   r10,SPRN_HDSISR
-   stw r10,PACA_EXGEN+EX_DSISR(r13)
EXCEPTION_COMMON(PACA_EXGEN, 0xe00)
bl  save_nvgprs
RECONCILE_IRQ_STATE(r10, r11)
-- 
2.22.0



[PATCH 08/18] powerpc/64s/exception: Add the virt variant of the denorm interrupt handler

2019-07-29 Thread Nicholas Piggin
All other virt handlers have the prolog code in the virt vector rather
than branching to the real vector. Follow this pattern in the denorm virt
handler.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 94f885c58022..1c07b5fc6692 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1853,7 +1853,11 @@ EXC_REAL_END(denorm_exception_hv, 0x1500, 0x100)
 
 #ifdef CONFIG_PPC_DENORMALISATION
 EXC_VIRT_BEGIN(denorm_exception, 0x5500, 0x100)
-   b   exc_real_0x1500_denorm_exception_hv
+   INT_HANDLER denorm_exception, 0x1500, 0, 2, 1, EXC_HV, PACA_EXGEN, 1, 
0, 0, 0, 0
+   mfspr   r10,SPRN_HSRR1
+   andis.  r10,r10,(HSRR1_DENORM)@h /* denorm? */
+   bne+denorm_assist
+   INT_VIRT_SAVE_SRR_AND_JUMP denorm_common, EXC_HV
 EXC_VIRT_END(denorm_exception, 0x5500, 0x100)
 #else
 EXC_VIRT_NONE(0x5500, 0x100)
-- 
2.22.0



[PATCH 07/18] powerpc/64s/exception: remove EXCEPTION_PROLOG_0/1, rename _2

2019-07-29 Thread Nicholas Piggin
EXCEPTION_PROLOG_0 and _1 have only a single caller, so expand them
into it.

Rename EXCEPTION_PROLOG_2_REAL to INT_SAVE_SRR_AND_JUMP and
EXCEPTION_PROLOG_2_VIRT to INT_VIRT_SAVE_SRR_AND_JUMP, which are
more descriptive.

No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 151 +--
 1 file changed, 73 insertions(+), 78 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index b07de1106d9e..94f885c58022 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -180,77 +180,7 @@ BEGIN_FTR_SECTION_NESTED(943)  
\
std ra,offset(r13); \
 END_FTR_SECTION_NESTED(ftr,ftr,943)
 
-.macro EXCEPTION_PROLOG_0 area
-   SET_SCRATCH0(r13)   /* save r13 */
-   GET_PACA(r13)
-   std r9,\area\()+EX_R9(r13)  /* save r9 */
-   OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR)
-   HMT_MEDIUM
-   std r10,\area\()+EX_R10(r13)/* save r10 - r12 */
-   OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR)
-.endm
-
-.macro EXCEPTION_PROLOG_1 hsrr, area, kvm, vec, dar, dsisr, bitmask
-   OPT_SAVE_REG_TO_PACA(\area\()+EX_PPR, r9, CPU_FTR_HAS_PPR)
-   OPT_SAVE_REG_TO_PACA(\area\()+EX_CFAR, r10, CPU_FTR_CFAR)
-   INTERRUPT_TO_KERNEL
-   SAVE_CTR(r10, \area\())
-   mfcrr9
-   .if \kvm
-   KVMTEST \hsrr \vec
-   .endif
-   .if \bitmask
-   lbz r10,PACAIRQSOFTMASK(r13)
-   andi.   r10,r10,\bitmask
-   /* Associate vector numbers with bits in paca->irq_happened */
-   .if \vec == 0x500 || \vec == 0xea0
-   li  r10,PACA_IRQ_EE
-   .elseif \vec == 0x900
-   li  r10,PACA_IRQ_DEC
-   .elseif \vec == 0xa00 || \vec == 0xe80
-   li  r10,PACA_IRQ_DBELL
-   .elseif \vec == 0xe60
-   li  r10,PACA_IRQ_HMI
-   .elseif \vec == 0xf00
-   li  r10,PACA_IRQ_PMI
-   .else
-   .abort "Bad maskable vector"
-   .endif
-
-   .if \hsrr == EXC_HV_OR_STD
-   BEGIN_FTR_SECTION
-   bne masked_Hinterrupt
-   FTR_SECTION_ELSE
-   bne masked_interrupt
-   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-   .elseif \hsrr
-   bne masked_Hinterrupt
-   .else
-   bne masked_interrupt
-   .endif
-   .endif
-
-   std r11,\area\()+EX_R11(r13)
-   std r12,\area\()+EX_R12(r13)
-
-   /*
-* DAR/DSISR, SCRATCH0 must be read before setting MSR[RI],
-* because a d-side MCE will clobber those registers so is
-* not recoverable if they are live.
-*/
-   GET_SCRATCH0(r10)
-   std r10,\area\()+EX_R13(r13)
-   .if \dar
-   mfspr   r10,SPRN_DAR
-   std r10,\area\()+EX_DAR(r13)
-   .endif
-   .if \dsisr
-   mfspr   r10,SPRN_DSISR
-   stw r10,\area\()+EX_DSISR(r13)
-   .endif
-.endm
-
-.macro EXCEPTION_PROLOG_2_REAL label, hsrr, set_ri
+.macro INT_SAVE_SRR_AND_JUMP label, hsrr, set_ri
ld  r10,PACAKMSR(r13)   /* get MSR value for kernel */
.if ! \set_ri
xorir10,r10,MSR_RI  /* Clear MSR_RI */
@@ -293,7 +223,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
b   .   /* prevent speculative execution */
 .endm
 
-.macro EXCEPTION_PROLOG_2_VIRT label, hsrr
+/* INT_SAVE_SRR_AND_JUMP works for real or virt, this is faster but virt only 
*/
+.macro INT_VIRT_SAVE_SRR_AND_JUMP label, hsrr
 #ifdef CONFIG_RELOCATABLE
.if \hsrr == EXC_HV_OR_STD
BEGIN_FTR_SECTION
@@ -620,7 +551,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
  *   This is done if early=2.
  */
 .macro INT_HANDLER name, vec, ool, early, virt, hsrr, area, ri, dar, dsisr, 
bitmask, kvm
-   EXCEPTION_PROLOG_0 \area
+   SET_SCRATCH0(r13)   /* save r13 */
+   GET_PACA(r13)
+   std r9,\area\()+EX_R9(r13)  /* save r9 */
+   OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR)
+   HMT_MEDIUM
+   std r10,\area\()+EX_R10(r13)/* save r10 - r12 */
+   OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR)
.if \ool
.if !\virt
b   tramp_real_\name
@@ -632,16 +569,74 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
TRAMP_VIRT_BEGIN(tramp_virt_\name)
.endif
.endif
-   EXCEPTION_PROLOG_1 \hsrr, \area, \kvm, \vec, \dar, \dsisr, \bitmask
+
+   OPT_SAVE_REG_TO_PACA(\area\()+EX_PPR, r9, CPU_FTR_HAS_PPR)
+   OPT_SAVE_REG_TO_PACA(\area\()+EX_CFAR, r10, CPU_FTR_CFAR)
+   INTERRUPT_TO_KERNEL
+   SAVE_CTR(r10, \area\())
+  

[PATCH 06/18] powerpc/64s/exception: Replace PROLOG macros and EXC helpers with a gas macro

2019-07-29 Thread Nicholas Piggin
This creates a single macro that generates the exception prolog code,
with variants specified by arguments, rather than assorted nested
macros for different variants.

The increasing length of the macro argument list is not nice to read or
modify, but this is a temporary condition that will be improved in
later changes.

No generated code change except BUG line number constants and label
names.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 502 +++
 1 file changed, 206 insertions(+), 296 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 9c407392774c..b07de1106d9e 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -43,6 +43,17 @@
 .endif
 #endif
 
+/*
+ * Following are fixed section helper macros.
+ *
+ * EXC_REAL_BEGIN/END  - real, unrelocated exception vectors
+ * EXC_VIRT_BEGIN/END  - virt (AIL), unrelocated exception vectors
+ * TRAMP_REAL_BEGIN- real, unrelocated helpers (virt may call these)
+ * TRAMP_VIRT_BEGIN- virt, unreloc helpers (in practice, real can use)
+ * TRAMP_KVM_BEGIN - KVM handlers, these are put into real, unrelocated
+ * EXC_COMMON  - After switching to virtual, relocated mode.
+ */
+
 #define EXC_REAL_BEGIN(name, start, size)  \
FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, 
exc_real_##start##_##name, start, size)
 
@@ -589,196 +600,54 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
 #endif
 
 /*
- * Following are the BOOK3S exception handler helper macros.
- * Handlers come in a number of types, and each type has a number of varieties.
- *
- * EXC_REAL_* - real, unrelocated exception vectors
- * EXC_VIRT_* - virt (AIL), unrelocated exception vectors
- * TRAMP_REAL_*   - real, unrelocated helpers (virt can call these)
- * TRAMP_VIRT_*   - virt, unreloc helpers (in practice, real can use)
- * TRAMP_KVM  - KVM handlers that get put into real, unrelocated
- * EXC_COMMON - virt, relocated common handlers
- *
- * The EXC handlers are given a name, and branch to name_common, or the
- * appropriate KVM or masking function. Vector handler verieties are as
- * follows:
- *
- * EXC_{REAL|VIRT}_BEGIN/END - used to open-code the exception
- *
- * EXC_{REAL|VIRT}  - standard exception
- *
- * EXC_{REAL|VIRT}_suffix
- * where _suffix is:
- *   - _MASKABLE   - maskable exception
- *   - _OOL- out of line with trampoline to common handler
- *   - _HV - HV exception
- *
- * There can be combinations, e.g., EXC_VIRT_OOL_MASKABLE_HV
+ * This is the BOOK3S interrupt entry code macro.
  *
- * KVM handlers come in the following verieties:
- * TRAMP_KVM
- * TRAMP_KVM_SKIP
- * TRAMP_KVM_HV
- * TRAMP_KVM_HV_SKIP
- *
- * COMMON handlers come in the following verieties:
- * EXC_COMMON_BEGIN/END - used to open-code the handler
- * EXC_COMMON
- * EXC_COMMON_ASYNC
- *
- * TRAMP_REAL and TRAMP_VIRT can be used with BEGIN/END. KVM
- * and OOL handlers are implemented as types of TRAMP and TRAMP_VIRT handlers.
+ * This can result in one of several things happening:
+ * - Branch to the _common handler, relocated, in virtual mode.
+ *   These are normal interrupts (synchronous and asynchronous) handled by
+ *   the kernel.
+ * - Branch to KVM, relocated but real mode interrupts remain in real mode.
+ *   These occur when HSTATE_IN_GUEST is set. The interrupt may be caused by
+ *   / intended for host or guest kernel, but KVM must always be involved
+ *   because the machine state is set for guest execution.
+ * - Branch to the masked handler, unrelocated.
+ *   These occur when maskable asynchronous interrupts are taken with the
+ *   irq_soft_mask set.
+ * - Branch to an "early" handler in real mode but relocated.
+ *   This is done if early=1. MCE and HMI use these to handle errors in real
+ *   mode.
+ * - Fall through and continue executing in real, unrelocated mode.
+ *   This is done if early=2.
  */
+.macro INT_HANDLER name, vec, ool, early, virt, hsrr, area, ri, dar, dsisr, 
bitmask, kvm
+   EXCEPTION_PROLOG_0 \area
+   .if \ool
+   .if !\virt
+   b   tramp_real_\name
+   .pushsection .text
+   TRAMP_REAL_BEGIN(tramp_real_\name)
+   .else
+   b   tramp_virt_\name
+   .pushsection .text
+   TRAMP_VIRT_BEGIN(tramp_virt_\name)
+   .endif
+   .endif
+   EXCEPTION_PROLOG_1 \hsrr, \area, \kvm, \vec, \dar, \dsisr, \bitmask
+   .if \early == 2
+   /* nothing more */
+   .elseif \early
+   mfctr   r10 /* save ctr, even for !RELOCATABLE */
+   BRANCH_TO_C000(r11, \name\()_early_common)
+   .elseif !\virt
+   EXCEPTION_PROLOG_2_REAL \name\()_common, \hsrr, \ri
+   .else
+   EXCEPTION_PROLOG_2_VIRT \name\()_common, \hsrr
+   .endif
+   .if \ool
+   .popsection
+   .endif
+.endm
 
-#define __EXC_REAL(name, start, size, area)  

[PATCH 05/18] powerpc/64s/exception: remove 0xb00 handler

2019-07-29 Thread Nicholas Piggin
This vector is not used by any supported processor, and has been
implemented as an unknown exception going back to 2.6. There is
nothing special about 0xb00, so remove it like other unused
vectors.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 723c37f3da17..9c407392774c 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1563,10 +1563,8 @@ EXC_COMMON_ASYNC(doorbell_super_common, 0xa00, 
unknown_exception)
 #endif
 
 
-EXC_REAL(trap_0b, 0xb00, 0x100)
-EXC_VIRT(trap_0b, 0x4b00, 0x100, 0xb00)
-TRAMP_KVM(PACA_EXGEN, 0xb00)
-EXC_COMMON(trap_0b_common, 0xb00, unknown_exception)
+EXC_REAL_NONE(0xb00, 0x100)
+EXC_VIRT_NONE(0x4b00, 0x100)
 
 /*
  * system call / hypercall (0xc00, 0x4c00)
-- 
2.22.0



[PATCH 04/18] powerpc/64s/exception: Fix performance monitor virt handler

2019-07-29 Thread Nicholas Piggin
The perf virt handler uses EXCEPTION_PROLOG_2_REAL rather than _VIRT.
In practice this is okay because the _REAL variant is usable by virt
mode interrupts, but should be fixed (and is a performance win).

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 60969992e9e0..723c37f3da17 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -750,7 +750,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP)
 #define __TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask)  \
TRAMP_VIRT_BEGIN(tramp_virt_##name);\
EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 0, realvec, 0, 0, bitmask ; \
-   EXCEPTION_PROLOG_2_REAL name##_common, EXC_STD, 1
+   EXCEPTION_PROLOG_2_VIRT name##_common, EXC_STD
 
 #define EXC_VIRT_OOL_MASKABLE(name, start, size, realvec, bitmask) \
__EXC_VIRT_OOL_MASKABLE(name, start, size); \
-- 
2.22.0



[PATCH 03/18] powerpc/64s/exception: Add EXC_HV_OR_STD, for HSRR if HV=1 else SRR

2019-07-29 Thread Nicholas Piggin
Use EXC_HV_OR_STD to de-duplicate the 0x500 external interrupt.
This helps with consolidation in future.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 102 +--
 1 file changed, 79 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 1fb46fb24696..60969992e9e0 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -109,6 +109,7 @@ name:
addis   reg,reg,(ABS_ADDR(label))@h
 
 /* Exception register prefixes */
+#define EXC_HV_OR_STD  2 /* depends on HVMODE */
 #define EXC_HV 1
 #define EXC_STD0
 
@@ -205,7 +206,13 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
.abort "Bad maskable vector"
.endif
 
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   bne masked_Hinterrupt
+   FTR_SECTION_ELSE
+   bne masked_interrupt
+   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+   .elseif \hsrr
bne masked_Hinterrupt
.else
bne masked_interrupt
@@ -237,7 +244,17 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
.if ! \set_ri
xorir10,r10,MSR_RI  /* Clear MSR_RI */
.endif
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
+   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
+   mtspr   SPRN_HSRR1,r10
+   FTR_SECTION_ELSE
+   mfspr   r11,SPRN_SRR0   /* save SRR0 */
+   mfspr   r12,SPRN_SRR1   /* and SRR1 */
+   mtspr   SPRN_SRR1,r10
+   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+   .elseif \hsrr
mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
mtspr   SPRN_HSRR1,r10
@@ -247,7 +264,15 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
mtspr   SPRN_SRR1,r10
.endif
LOAD_HANDLER(r10, \label\())
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   mtspr   SPRN_HSRR0,r10
+   HRFI_TO_KERNEL
+   FTR_SECTION_ELSE
+   mtspr   SPRN_SRR0,r10
+   RFI_TO_KERNEL
+   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+   .elseif \hsrr
mtspr   SPRN_HSRR0,r10
HRFI_TO_KERNEL
.else
@@ -259,14 +284,26 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 
 .macro EXCEPTION_PROLOG_2_VIRT label, hsrr
 #ifdef CONFIG_RELOCATABLE
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
+   FTR_SECTION_ELSE
+   mfspr   r11,SPRN_SRR0   /* save SRR0 */
+   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+   .elseif \hsrr
mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
.else
mfspr   r11,SPRN_SRR0   /* save SRR0 */
.endif
LOAD_HANDLER(r12, \label\())
mtctr   r12
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
+   FTR_SECTION_ELSE
+   mfspr   r12,SPRN_SRR1   /* and HSRR1 */
+   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+   .elseif \hsrr
mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
.else
mfspr   r12,SPRN_SRR1   /* and HSRR1 */
@@ -275,7 +312,15 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
mtmsrd  r10,1   /* Set RI (EE=0) */
bctr
 #else
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
+   mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
+   FTR_SECTION_ELSE
+   mfspr   r11,SPRN_SRR0   /* save SRR0 */
+   mfspr   r12,SPRN_SRR1   /* and SRR1 */
+   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+   .elseif \hsrr
mfspr   r11,SPRN_HSRR0  /* save HSRR0 */
mfspr   r12,SPRN_HSRR1  /* and HSRR1 */
.else
@@ -316,7 +361,13 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 .macro KVMTEST hsrr, n
lbz r10,HSTATE_IN_GUEST(r13)
cmpwi   r10,0
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   bne do_kvm_H\n
+   FTR_SECTION_ELSE
+   bne do_kvm_\n
+   ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+   .elseif \hsrr
bne do_kvm_H\n
.else
bne do_kvm_\n
@@ -342,7 +393,13 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948)
std r12,HSTATE_SCRATCH0(r13)
sldir12,r9,32
/* HSRR variants have the 0x2 bit added to their trap number */
-   .if \hsrr
+   .if \hsrr == EXC_HV_OR_STD
+   BEGIN_FTR_SECTION
+   ori

[PATCH 02/18] powerpc/64s/exception: move head-64.h exception code to exception-64s.S

2019-07-29 Thread Nicholas Piggin
The head-64.h code should deal only with the head code sections
and offset calculations.

No generated code change except BUG line number constants.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/include/asm/head-64.h   | 41 
 arch/powerpc/kernel/exceptions-64s.S | 41 
 2 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/arch/powerpc/include/asm/head-64.h 
b/arch/powerpc/include/asm/head-64.h
index a466765709a9..2dabcf668292 100644
--- a/arch/powerpc/include/asm/head-64.h
+++ b/arch/powerpc/include/asm/head-64.h
@@ -169,47 +169,6 @@ end_##sname:
 
 #define ABS_ADDR(label) (label - fs_label + fs_start)
 
-#define EXC_REAL_BEGIN(name, start, size)  \
-   FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, 
exc_real_##start##_##name, start, size)
-
-#define EXC_REAL_END(name, start, size)\
-   FIXED_SECTION_ENTRY_END_LOCATION(real_vectors, 
exc_real_##start##_##name, start, size)
-
-#define EXC_VIRT_BEGIN(name, start, size)  \
-   FIXED_SECTION_ENTRY_BEGIN_LOCATION(virt_vectors, 
exc_virt_##start##_##name, start, size)
-
-#define EXC_VIRT_END(name, start, size)\
-   FIXED_SECTION_ENTRY_END_LOCATION(virt_vectors, 
exc_virt_##start##_##name, start, size)
-
-#define EXC_COMMON_BEGIN(name) \
-   USE_TEXT_SECTION(); \
-   .balign IFETCH_ALIGN_BYTES; \
-   .global name;   \
-   _ASM_NOKPROBE_SYMBOL(name); \
-   DEFINE_FIXED_SYMBOL(name);  \
-name:
-
-#define TRAMP_REAL_BEGIN(name) \
-   FIXED_SECTION_ENTRY_BEGIN(real_trampolines, name)
-
-#define TRAMP_VIRT_BEGIN(name) \
-   FIXED_SECTION_ENTRY_BEGIN(virt_trampolines, name)
-
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#define TRAMP_KVM_BEGIN(name)  \
-   TRAMP_VIRT_BEGIN(name)
-#else
-#define TRAMP_KVM_BEGIN(name)
-#endif
-
-#define EXC_REAL_NONE(start, size) \
-   FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, 
exc_real_##start##_##unused, start, size); \
-   FIXED_SECTION_ENTRY_END_LOCATION(real_vectors, 
exc_real_##start##_##unused, start, size)
-
-#define EXC_VIRT_NONE(start, size) \
-   FIXED_SECTION_ENTRY_BEGIN_LOCATION(virt_vectors, 
exc_virt_##start##_##unused, start, size); \
-   FIXED_SECTION_ENTRY_END_LOCATION(virt_vectors, 
exc_virt_##start##_##unused, start, size)
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_HEAD_64_H */
diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index f79f811ee131..1fb46fb24696 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -43,6 +43,47 @@
 .endif
 #endif
 
+#define EXC_REAL_BEGIN(name, start, size)  \
+   FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, 
exc_real_##start##_##name, start, size)
+
+#define EXC_REAL_END(name, start, size)\
+   FIXED_SECTION_ENTRY_END_LOCATION(real_vectors, 
exc_real_##start##_##name, start, size)
+
+#define EXC_VIRT_BEGIN(name, start, size)  \
+   FIXED_SECTION_ENTRY_BEGIN_LOCATION(virt_vectors, 
exc_virt_##start##_##name, start, size)
+
+#define EXC_VIRT_END(name, start, size)\
+   FIXED_SECTION_ENTRY_END_LOCATION(virt_vectors, 
exc_virt_##start##_##name, start, size)
+
+#define EXC_COMMON_BEGIN(name) \
+   USE_TEXT_SECTION(); \
+   .balign IFETCH_ALIGN_BYTES; \
+   .global name;   \
+   _ASM_NOKPROBE_SYMBOL(name); \
+   DEFINE_FIXED_SYMBOL(name);  \
+name:
+
+#define TRAMP_REAL_BEGIN(name) \
+   FIXED_SECTION_ENTRY_BEGIN(real_trampolines, name)
+
+#define TRAMP_VIRT_BEGIN(name) \
+   FIXED_SECTION_ENTRY_BEGIN(virt_trampolines, name)
+
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+#define TRAMP_KVM_BEGIN(name)  \
+   TRAMP_VIRT_BEGIN(name)
+#else
+#define TRAMP_KVM_BEGIN(name)
+#endif
+
+#define EXC_REAL_NONE(start, size) \
+   FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, 
exc_real_##start##_##unused, start, size); \
+   FIXED_SECTION_ENTRY_END_LOCATION(real_vectors, 
exc_real_##start##_##unused, start, size)
+
+#define EXC_VIRT_NONE(start, size) \
+   FIXED_SECTION_ENTRY_BEGIN_LOCATION(virt_vectors, 
exc_virt_##start##_##unused, 

[PATCH 01/18] powerpc/64s/exception: Fix DAR load for handle_page_fault error case

2019-07-29 Thread Nicholas Piggin
This buglet goes back to before the 64/32 arch merge, but it does not
seem to have had practical consequences, because bad_page_fault does
not use the 2nd argument, but rather regs->dar/nip. (_DAR is a 64-bit
stack frame field, so lwz loaded only 32 bits of it; ld loads the full
doubleword.)

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/exceptions-64s.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index 6b409d62d36c..f79f811ee131 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -2336,7 +2336,7 @@ handle_page_fault:
bl  save_nvgprs
mr  r5,r3
addir3,r1,STACK_FRAME_OVERHEAD
-   lwz r4,_DAR(r1)
+   ld  r4,_DAR(r1)
bl  bad_page_fault
b   ret_from_except
 
-- 
2.22.0



[PATCH 00/18] powerpc/64s/exception: cleanup and gas macroify, round 2

2019-07-29 Thread Nicholas Piggin
This series goes on top of the unmerged machine check handler
changes

https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=118814

This carries on with the goal of unwinding macros and consolidating
code. It gets most of the way there, but we have traded that
improvement for another problem: the argument list for the code
generation macros is unwieldy.

  INT_HANDLER system_reset, 0x100, 0, 0, 0, EXC_STD, PACA_EXNMI, 0, 0, 0, 0, 1

There are two possible ways I see to solve this. One is to come up
with new sets of constants for each argument.

  INT_HANDLER system_reset, 0x100, INLINE, FULL, REAL, EXC_STD, PACA_EXNMI, 
CLEAR_RI, NO_DAR, NO_DSISR, NO_MASK, KVM

I don't really like that; my preferred way is to set symbols that
configure the behaviour of the code generation macro.

  INT_DEFINE_BEGIN(system_reset)
  IVEC=0x100
  IHSRR=0
  IAREA=PACA_EXNMI
  ISET_RI=0
  IKVM_REAL=1
  INT_DEFINE_END(system_reset)

  INT_HANDLER system_reset
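
For concreteness, the first option mostly amounts to putting values
behind the names used in that example. An illustrative sketch only,
with values inferred from the current argument list, not from an
actual patch:

  /* Hypothetical readability constants for the INT_HANDLER arguments */
  #define INLINE	0	/* ool: handler body generated inline */
  #define FULL		0	/* early: full (not early) handler */
  #define REAL		0	/* virt: real-mode vector */
  #define CLEAR_RI	0	/* ri: clear MSR[RI] on entry */
  #define NO_DAR	0
  #define NO_DSISR	0
  #define NO_MASK	0
  #define KVM		1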

Any other suggestions?

Thanks,
Nick

Nicholas Piggin (18):
  powerpc/64s/exception: Fix DAR load for handle_page_fault error case
  powerpc/64s/exception: move head-64.h exception code to
exception-64s.S
  powerpc/64s/exception: Add EXC_HV_OR_STD, for HSRR if HV=1 else SRR
  powerpc/64s/exception: Fix performance monitor virt handler
  powerpc/64s/exception: remove 0xb00 handler
  powerpc/64s/exception: Replace PROLOG macros and EXC helpers with a
gas macro
  powerpc/64s/exception: remove EXCEPTION_PROLOG_0/1, rename _2
  powerpc/64s/exception: Add the virt variant of the denorm interrupt
handler
  powerpc/64s/exception: INT_HANDLER support HDAR/HDSISR and use it in
HDSI
  powerpc/64s/exception: Add INT_KVM_HANDLER gas macro
  powerpc/64s/exception: KVM_HANDLER reorder arguments to match other
macros
  powerpc/64s/exception: Merge EXCEPTION_PROLOG_COMMON_2/3
  powerpc/64s/exception: Add INT_COMMON gas macro to generate common
exception code
  powerpc/64s/exception: Expand EXCEPTION_COMMON macro into caller
  powerpc/64s/exception: Expand EXCEPTION_PROLOG_COMMON_1 and 2 into
caller
  powerpc/64s/exception: INT_COMMON add DAR, DSISR, reconcile options
  powerpc/64s/exception: move interrupt entry code above the common
handler
  powerpc/64s/exception: program check handler do not branch into a
macro

 arch/powerpc/include/asm/head-64.h   |   41 -
 arch/powerpc/kernel/exceptions-64s.S | 1264 +-
 2 files changed, 623 insertions(+), 682 deletions(-)

-- 
2.22.0



Re: [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

2019-07-29 Thread Jason Yan



On 2019/7/29 19:43, Christophe Leroy wrote:



Le 17/07/2019 à 10:06, Jason Yan a écrit :

When kaslr is enabled, the kernel offset is different for every boot.
This makes the kernel difficult to debug. Dump out the kernel
offset on panic so that we can easily debug the kernel.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/include/asm/page.h |  5 +
  arch/powerpc/kernel/machine_kexec.c |  1 +
  arch/powerpc/kernel/setup-common.c  | 23 +++
  3 files changed, 29 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h 
b/arch/powerpc/include/asm/page.h

index 60a68d3a54b1..cd3ac530e58d 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -317,6 +317,11 @@ struct vm_area_struct;
  extern unsigned long kimage_vaddr;
+static inline unsigned long kaslr_offset(void)
+{
+    return kimage_vaddr - KERNELBASE;
+}
+
  #include 
  #endif /* __ASSEMBLY__ */
  #include 
diff --git a/arch/powerpc/kernel/machine_kexec.c 
b/arch/powerpc/kernel/machine_kexec.c

index c4ed328a7b96..078fe3d76feb 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
  VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
  VMCOREINFO_OFFSET(mmu_psize_def, shift);
  #endif
+    vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
  }
  /*
diff --git a/arch/powerpc/kernel/setup-common.c 
b/arch/powerpc/kernel/setup-common.c

index 1f8db666468d..49e540c0adeb 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -715,12 +715,35 @@ static struct notifier_block ppc_panic_block = {
  .priority = INT_MIN /* may not return; must be done last */
  };
+/*
+ * Dump out kernel offset information on panic.
+ */
+static int dump_kernel_offset(struct notifier_block *self, unsigned 
long v,

+  void *p)
+{
+    const unsigned long offset = kaslr_offset();
+
+    if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0)
+    pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
+ offset, KERNELBASE);
+    else
+    pr_emerg("Kernel Offset: disabled\n");


Do we really need that else branch ?

Why not just make the below atomic_notifier_chain_register() 
conditional on IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0

and not print anything otherwise ?



I'm trying to follow the same approach as x86/arm64. But I agree
with you that it's simpler not to print anything if not randomized.
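
For illustration, the conditional registration could be a minimal
sketch like this in setup_panic() (untested, just restating
Christophe's suggestion in code):

    /* Register the offset dumper only when an offset was actually
     * applied; a panic then prints nothing about KASLR otherwise. */
    if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
        atomic_notifier_chain_register(&panic_notifier_list,
                                       &kernel_offset_notifier);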


Christophe


+
+    return 0;
+}
+
+static struct notifier_block kernel_offset_notifier = {
+    .notifier_call = dump_kernel_offset
+};
+
  void __init setup_panic(void)
  {
  /* PPC64 always does a hard irq disable in its panic handler */
  if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
  return;
  atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block);

+    atomic_notifier_chain_register(&panic_notifier_list,
+   &kernel_offset_notifier);
  }
  #ifdef CONFIG_CHECK_CACHE_COHERENCY



.





Re: [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter

2019-07-29 Thread Jason Yan




On 2019/7/29 19:38, Christophe Leroy wrote:



Le 17/07/2019 à 10:06, Jason Yan a écrit :

One may want to disable kaslr at boot time, so provide a cmdline
parameter 'nokaslr' to support this.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/kaslr_booke.c | 14 ++
  1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c 
b/arch/powerpc/kernel/kaslr_booke.c

index 00339c05879f..e65a5d9d2ff1 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -373,6 +373,18 @@ static unsigned long __init 
kaslr_choose_location(void *dt_ptr, phys_addr_t size

  return kaslr_offset;
  }
+static inline __init bool kaslr_disabled(void)
+{
+    char *str;
+
+    str = strstr(early_command_line, "nokaslr");


Why using early_command_line instead of boot_command_line ?



Will switch to boot_command_line.




+    if ((str == early_command_line) ||
+    (str > early_command_line && *(str - 1) == ' '))


Is that stuff really needed ?

Why not just:

return strstr(early_command_line, "nokaslr") != NULL;



This code is derived from other arches such as arm64/mips. It's trying
to make sure that 'nokaslr' is a separate word and not part of another
word such as 'abcnokaslr'.
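
For reference, a self-contained sketch of such a whole-word match
(hypothetical helper name; unlike the quoted code it also checks the
character after the match, which may or may not be wanted):

    /* Illustrative only: accept "nokaslr" at the start of the string
     * or after a space, and followed by space, '=' or end of string,
     * so that e.g. "abcnokaslr" is rejected. */
    static bool __init cmdline_has_word(const char *cmdline, const char *word)
    {
        const char *str = strstr(cmdline, word);
        char next;

        if (!str)
            return false;
        if (str != cmdline && str[-1] != ' ')
            return false;
        next = str[strlen(word)];
        return next == '\0' || next == ' ' || next == '=';
    }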



+    return true;
+
+    return false;
+}




+
  /*
   * To see if we need to relocate the kernel to a random offset
   * void *dt_ptr - address of the device tree
@@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, 
phys_addr_t size)

  kernel_sz = (unsigned long)_end - KERNELBASE;
  kaslr_get_cmdline(dt_ptr);
+    if (kaslr_disabled())
+    return;
  offset = kaslr_choose_location(dt_ptr, size, kernel_sz);



Christophe

.





Re: [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset

2019-07-29 Thread Jason Yan



On 2019/7/29 19:33, Christophe Leroy wrote:



Le 17/07/2019 à 10:06, Jason Yan a écrit :

After we have the basic support of relocate the kernel in some
appropriate place, we can start to randomize the offset now.

Entropy is derived from the banner and timer, which will change every
build and boot. This is not particularly safe, so additionally the
bootloader may pass entropy via the /chosen/kaslr-seed node in the
device tree.

We will use the first 512M of the low memory to randomize the kernel
image. The memory will be split in 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose a
16K aligned offset inside the 64M zone to put the kernel in.

 KERNELBASE

 |-->   64M   <--|
 |   |
 +---+    ++---+
 |   ||    |kernel|    |   |
 +---+    ++---+
 | |
 |->   offset    <-|

   kimage_vaddr

We also check if we will overlap with some areas like the dtb area,
the initrd area or the crashkernel area. If we cannot find a proper
area, kaslr will be disabled and the kernel will boot from its
original location.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/kaslr_booke.c | 335 +-
  1 file changed, 333 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kaslr_booke.c 
b/arch/powerpc/kernel/kaslr_booke.c

index 72d8e9432048..90357f4bd313 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -22,6 +22,8 @@
  #include 
  #include 
  #include 
+#include 
+#include 
  #include 
  #include 
  #include 
@@ -33,15 +35,342 @@
  #include 
  #include 
  #include 
+#include 
  #include 
+#include 
+#include 
+
+#ifdef DEBUG
+#define DBG(fmt...) printk(KERN_ERR fmt)
+#else
+#define DBG(fmt...)
+#endif
+
+struct regions {
+    unsigned long pa_start;
+    unsigned long pa_end;
+    unsigned long kernel_size;
+    unsigned long dtb_start;
+    unsigned long dtb_end;
+    unsigned long initrd_start;
+    unsigned long initrd_end;
+    unsigned long crash_start;
+    unsigned long crash_end;
+    int reserved_mem;
+    int reserved_mem_addr_cells;
+    int reserved_mem_size_cells;
+};
  extern int is_second_reloc;
+/* Simplified build-specific string for starting entropy. */
+static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+    LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+static char __initdata early_command_line[COMMAND_LINE_SIZE];
+
+static __init void kaslr_get_cmdline(void *fdt)
+{
+    const char *cmdline = CONFIG_CMDLINE;
+    if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
+    int node;
+    const u8 *prop;
+    node = fdt_path_offset(fdt, "/chosen");
+    if (node < 0)
+    goto out;
+
+    prop = fdt_getprop(fdt, node, "bootargs", NULL);
+    if (!prop)
+    goto out;
+    cmdline = prop;
+    }
+out:
+    strscpy(early_command_line, cmdline, COMMAND_LINE_SIZE);
+}
+


Can you explain why we need that and can't use the already existing 
cmdline stuff ?




I was afraid of breaking the other initialization code for the cmdline
buffer. I will try using it and see if there are any problems.
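
If it works out, the change might be as small as this (untested
sketch, assuming boot_command_line is usable this early in boot):

    /* Sketch: fill the global buffer instead of a private copy, so
     * later users of boot_command_line see the same string. */
    strscpy(boot_command_line, cmdline, COMMAND_LINE_SIZE);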



Christophe

+static unsigned long __init rotate_xor(unsigned long hash, const void 
*area,

+    size_t size)
+{
+    size_t i;
+    unsigned long *ptr = (unsigned long *)area;
+
+    for (i = 0; i < size / sizeof(hash); i++) {
+    /* Rotate by odd number of bits and XOR. */
+    hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+    hash ^= ptr[i];
+    }
+
+    return hash;
+}
+
+/* Attempt to create a simple but unpredictable starting entropy. */
+static unsigned long __init get_boot_seed(void *fdt)
+{
+    unsigned long hash = 0;
+
+    hash = rotate_xor(hash, build_str, sizeof(build_str));
+    hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
+
+    return hash;
+}
+
+static __init u64 get_kaslr_seed(void *fdt)
+{
+    int node, len;
+    fdt64_t *prop;
+    u64 ret;
+
+    node = fdt_path_offset(fdt, "/chosen");
+    if (node < 0)
+    return 0;
+
+    prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
+    if (!prop || len != sizeof(u64))
+    return 0;
+
+    ret = fdt64_to_cpu(*prop);
+    *prop = 0;
+    return ret;
+}
+
+static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
+{
+    return e1 >= s2 && e2 >= s1;
+}
+
+static __init bool overlaps_reserved_region(const void *fdt, u32 start,
+   u32 end, struct regions *regions)
+{
+    int subnode, len, i;
+    u64 base, size;
+
+    /* check for overlap with /memreserve/ entries */
+    for (i = 0; i < fdt_num_mem_rsv(fdt); i++) {
+ 

Re: [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized

2019-07-29 Thread Jason Yan



On 2019/7/29 19:19, Christophe Leroy wrote:



Le 17/07/2019 à 10:06, Jason Yan a écrit :

The original kernel still exists in the memory, clear it now.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/kaslr_booke.c  | 11 +++
  arch/powerpc/mm/mmu_decl.h |  2 ++
  arch/powerpc/mm/nohash/fsl_booke.c |  1 +
  3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c 
b/arch/powerpc/kernel/kaslr_booke.c

index 90357f4bd313..00339c05879f 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -412,3 +412,14 @@ notrace void __init kaslr_early_init(void 
*dt_ptr, phys_addr_t size)

  reloc_kernel_entry(dt_ptr, kimage_vaddr);
  }
+
+void __init kaslr_second_init(void)
+{
+    /* If randomized, clear the original kernel */
+    if (kimage_vaddr != KERNELBASE) {
+    unsigned long kernel_sz;
+
+    kernel_sz = (unsigned long)_end - kimage_vaddr;
+    memset((void *)KERNELBASE, 0, kernel_sz);


Why are we clearing ? Is that just to tidy up or is it of security 
importance ?




If we leave it there, attackers can still find the kernel image very
easily; it's still dangerous, especially without
CONFIG_STRICT_KERNEL_RWX enabled.



If so, maybe memzero_explicit() should be used instead ?



OK
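
For reference, a minimal sketch of the change being agreed to here:

    /* Unlike memset(), a memzero_explicit() call cannot be
     * optimised away by the compiler. */
    kernel_sz = (unsigned long)_end - kimage_vaddr;
    memzero_explicit((void *)KERNELBASE, kernel_sz);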


+    }
+}
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 754ae1e69f92..9912ee598f9b 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -150,8 +150,10 @@ extern void loadcam_multi(int first_idx, int num, 
int tmp_idx);

  #ifdef CONFIG_RANDOMIZE_BASE
  extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+extern void kaslr_second_init(void);


No new 'extern' please.


  #else
  static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+static inline void kaslr_second_init(void) {}
  #endif
  struct tlbcam {
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c 
b/arch/powerpc/mm/nohash/fsl_booke.c

index 8d25a8dc965f..fa5a87f5c08e 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -269,6 +269,7 @@ notrace void __init relocate_init(u64 dt_ptr, 
phys_addr_t start)

  kernstart_addr = start;
  if (is_second_reloc) {
  virt_phys_offset = PAGE_OFFSET - memstart_addr;
+    kaslr_second_init();
  return;
  }



Christophe

.





Re: [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper

2019-07-29 Thread Jason Yan



On 2019/7/29 19:08, Christophe Leroy wrote:



Le 17/07/2019 à 10:06, Jason Yan a écrit :

Add a new helper reloc_kernel_entry() to jump back to the start of the
new kernel. After we put the new kernel in a randomized place we can use
this new helper to enter the kernel and begin to relocate again.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/head_fsl_booke.S | 16 
  arch/powerpc/mm/mmu_decl.h   |  1 +
  2 files changed, 17 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S 
b/arch/powerpc/kernel/head_fsl_booke.S

index a57d44638031..ce40f96dae20 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1144,6 +1144,22 @@ _GLOBAL(create_tlb_entry)
  sync
  blr
+/*
+ * Return to the start of the relocated kernel and run again
+ * r3 - virtual address of fdt
+ * r4 - entry of the kernel
+ */
+_GLOBAL(reloc_kernel_entry)
+    mfmsr    r7
+    li    r8,(MSR_IS | MSR_DS)
+    andc    r7,r7,r8


Instead of the li/andc, what about the following:

rlwinm r7, r7, 0, ~(MSR_IS | MSR_DS)



Good idea.


+
+    mtspr    SPRN_SRR0,r4
+    mtspr    SPRN_SRR1,r7
+    isync
+    sync
+    rfi


Are the isync/sync really necessary ? AFAIK, rfi is context synchronising.



I've seen some code with sync before rfi, so I'm not sure. I will check
this and drop the isync/sync if rfi is indeed context synchronising.

Thanks.


+
  /*
   * Create a tlb entry with the same effective and physical address as
   * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index d7737cf97cee..dae8e9177574 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
  extern int switch_to_as1(void);
  extern void restore_to_as0(int esel, int offset, void *dt_ptr, int 
bootcpu);
  extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, 
int entry);

+extern void reloc_kernel_entry(void *fdt, int addr);


No new 'extern' please, see 
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8125//artifact/linux/checkpatch.log 





  #endif
  extern void loadcam_entry(unsigned int index);
  extern void loadcam_multi(int first_idx, int num, int tmp_idx);



Christophe

.





Re: [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper

2019-07-29 Thread Jason Yan



On 2019/7/29 19:05, Christophe Leroy wrote:



Le 17/07/2019 à 10:06, Jason Yan a écrit :

Add a new helper create_tlb_entry() to create a tlb entry by the virtual
and physical address. This is a preparation to support boot kernel at a
randomized address.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/head_fsl_booke.S | 30 
  arch/powerpc/mm/mmu_decl.h   |  1 +
  2 files changed, 31 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S 
b/arch/powerpc/kernel/head_fsl_booke.S

index adf0505dbe02..a57d44638031 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
  .long    -1
  #endif
+/*
+ * Create a 64M tlb by address and entry
+ * r3/r4 - physical address
+ * r5 - virtual address
+ * r6 - entry
+ */
+_GLOBAL(create_tlb_entry)
+    lis r7,0x1000   /* Set MAS0(TLBSEL) = 1 */
+    rlwimi  r7,r6,16,4,15   /* Setup MAS0 = TLBSEL | ESEL(r6) */
+    mtspr   SPRN_MAS0,r7    /* Write MAS0 */
+
+    lis r6,(MAS1_VALID|MAS1_IPROT)@h
+    ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
+    mtspr   SPRN_MAS1,r6    /* Write MAS1 */
+
+    lis r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+    ori r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+    and r6,r6,r5
+    ori    r6,r6,MAS2_M@l
+    mtspr   SPRN_MAS2,r6    /* Write MAS2(EPN) */
+
+    mr  r8,r4
+    ori r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)


Could drop the mr r8, r4 and do:

ori r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)



OK, thanks for the suggestion.


+    mtspr   SPRN_MAS3,r8    /* Write MAS3(RPN) */
+
+    tlbwe   /* Write TLB */
+    isync
+    sync
+    blr
+
  /*
   * Create a tlb entry with the same effective and physical address as
   * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 32c1a191c28a..d7737cf97cee 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long 
ram, unsigned long virt,

  extern void adjust_total_lowmem(void);
  extern int switch_to_as1(void);
  extern void restore_to_as0(int esel, int offset, void *dt_ptr, int 
bootcpu);
+extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, 
int entry);


Please please do not add new declarations with the useless 'extern' 
keyword. See checkpatch report: 
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8124//artifact/linux/checkpatch.log 



Will drop all useless 'extern' in this and other patches, thanks.




  #endif
  extern void loadcam_entry(unsigned int index);
  extern void loadcam_multi(int first_idx, int num, int tmp_idx);



.





Re: [PATCH] powerpc: Use nid as fallback for chip_id

2019-07-29 Thread Michael Ellerman
Srikar Dronamraju  writes:
> One of the uses of chip_id is to find out all cores that are part of the same
> chip. However the ibm,chip-id property is not present in the device-tree of
> PowerVM LPARs. Hence lscpu output shows one core per socket and multiple
> sockets.
>
> Before the patch.
> # lscpu
> Architecture:ppc64le
> Byte Order:  Little Endian
> CPU(s):  128
> On-line CPU(s) list: 0-127
> Thread(s) per core:  8
> Core(s) per socket:  1
> Socket(s):   16
> NUMA node(s):2
> Model:   2.2 (pvr 004e 0202)
> Model name:  POWER9 (architected), altivec supported
> Hypervisor vendor:   pHyp
> Virtualization type: para
> L1d cache:   32K
> L1i cache:   32K
> L2 cache:512K
> L3 cache:10240K
> NUMA node0 CPU(s):   0-63
> NUMA node1 CPU(s):   64-127
>
> # cat /sys/devices/system/cpu/cpu0/topology/physical_package_id
> -1
>
> Signed-off-by: Srikar Dronamraju 
> ---
>  arch/powerpc/kernel/prom.c | 10 --
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> index 7159e791a70d..0b8918b43580 100644
> --- a/arch/powerpc/kernel/prom.c
> +++ b/arch/powerpc/kernel/prom.c
> @@ -867,18 +867,24 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
>   * @cpu: The logical cpu number.
>   *
>   * Return the value of the ibm,chip-id property corresponding to the given
> - * logical cpu number. If the chip-id can not be found, returns -1.
> + * logical cpu number. If the chip-id can not be found, return nid.
> + *
>   */
>  int cpu_to_chip_id(int cpu)
>  {
>   struct device_node *np;
> + int chip_id = -1;
>  
>   np = of_get_cpu_node(cpu, NULL);
>   if (!np)
>   return -1;
>  
> + chip_id = of_get_ibm_chip_id(np);
> + if (chip_id == -1)
> + chip_id = of_node_to_nid(np);
> +
>   of_node_put(np);
> - return of_get_ibm_chip_id(np);
> + return chip_id;
>  }

A nid is not a chip-id.

This obviously happens to work for the case you've identified above but
it's not something I'm happy to merge in general.

We could do a similar change in the topology code, but I'd probably like
it to be restricted to when we're running under PowerVM and there are no
chip-ids found at all.
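
For illustration only, the guard for such a restriction might look like
the following untested sketch (the FW_FEATURE_LPAR check is my
assumption for detecting PowerVM, and per the comment above it would
live in the topology code rather than cpu_to_chip_id()):

	chip_id = of_get_ibm_chip_id(np);
	/* Sketch: only fall back to the nid when running as a PowerVM
	 * LPAR, where firmware never supplies ibm,chip-id. */
	if (chip_id == -1 && firmware_has_feature(FW_FEATURE_LPAR))
		chip_id = of_node_to_nid(np);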

I'm also not clear how it will interact with migration.

cheers


Re: [PATCH v2] powerpc/imc: Dont create debugfs files for cpu-less nodes

2019-07-29 Thread Michael Ellerman
Anju T Sudhakar  writes:
> Hi Qian,
>
> On 7/16/19 12:11 AM, Qian Cai wrote:
>> On Thu, 2019-07-11 at 14:53 +1000, Michael Ellerman wrote:
>>> Hi Maddy,
>>>
>>> Madhavan Srinivasan  writes:
 diff --git a/arch/powerpc/platforms/powernv/opal-imc.c
 b/arch/powerpc/platforms/powernv/opal-imc.c
 index 186109bdd41b..e04b20625cb9 100644
 --- a/arch/powerpc/platforms/powernv/opal-imc.c
 +++ b/arch/powerpc/platforms/powernv/opal-imc.c
 @@ -69,20 +69,20 @@ static void export_imc_mode_and_cmd(struct device_node
 *node,
    if (of_property_read_u32(node, "cb_offset", &cb_offset))
    cb_offset = IMC_CNTL_BLK_OFFSET;
   
 -  for_each_node(nid) {
 -  loc = (u64)(pmu_ptr->mem_info[chip].vbase) + cb_offset;
 +  while (ptr->vbase != NULL) {
>>> This means you'll bail out as soon as you find a node with no vbase, but
>>> it's possible we could have a CPU-less node intermingled with other
>>> nodes.
>>>
>>> So I think you want to keep the for loop, but continue if you see a NULL
>>> vbase?
>> Not sure if this will also take care of some of those messages during the
>> boot on today's linux-next even without this patch.
>>
>>
>> [   18.077780][T1] debugfs: Directory 'imc' with parent 'powerpc' already
>> present!
>>
>>
>
> This is introduced by a recent commit: c33d442328f55 (debugfs: make
> error message a bit more verbose).
>
> So basically, the debugfs imc_* file is created per node, and is created
> by the first nest unit which is being registered. For the subsequent
> nest units, debugfs_create_dir() will just return since the imc_* file
> already exists.
>
> The commit "c33d442328f55 (debugfs: make error message a bit more
> verbose)" prints a message if the debugfs file already exists in
> debugfs_create_dir(). That is why we are encountering these messages
> now.
>
> This patch (i.e. powerpc/imc: Dont create debugfs files for cpu-less
> nodes) will address the initial issue, i.e. "numa crash while reading
> imc_* debugfs files for cpu less nodes", and will not address these
> debugfs messages.
>
> But yeah this is a good catch. We can have some checks to avoid these
> debugfs messages.
>
>
> Hi Michael,
>
> Do we need to have a separate patch to address these debugfs messages,
> or can we address the same in the next version of this patch itself?

No, please do one logical change per patch.

cheers
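
One possible shape for the check Anju mentions above, as an untested
sketch (the debugfs_lookup() usage and reference handling are my
assumptions, not the eventual fix):

	/* Reuse the per-node 'imc' directory if an earlier nest unit
	 * already created it, so later units neither re-create it nor
	 * warn; dentry reference handling elided for brevity. */
	struct dentry *dir;

	dir = debugfs_lookup("imc", powerpc_debugfs_root);
	if (!dir)
		dir = debugfs_create_dir("imc", powerpc_debugfs_root);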


Re: [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses

2019-07-29 Thread Christophe Leroy




Le 24/07/2019 à 10:46, Nicholas Piggin a écrit :

Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
given a virtual address above PAGE_OFFSET.

Signed-off-by: Nicholas Piggin 
---
  arch/powerpc/include/asm/page.h | 14 --
  1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 0d52f57fca04..c8bb14ff4713 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -215,9 +215,19 @@ static inline bool pfn_valid(unsigned long pfn)
  /*
   * gcc miscompiles (unsigned long)(_var) - PAGE_OFFSET
   * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
+ * This also results in better code generation.
   */
-#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
-#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
+#define __va(x)
\
+({ \
+   VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);   \


Do we really want to add a BUG_ON here ?
Can't we just add a WARN_ON, like in 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/include/asm/io.h?id=6bf752daca07c85c181159f75dcf65b12056883b 
?



+   (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);\
+})
+
+#define __pa(x)
\
+({ \
+   VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);\


Same


+   (unsigned long)(x) & 0x0fffffffffffffffUL;  \
+})
  
  #else /* 32-bit, non book E */

  #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - 
MEMORY_START))



Would it be possible to change those macros into static inlines ?
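
For what it's worth, an untested sketch of the static inline form with
WARN_ON, assuming nothing relies on __va()/__pa() in constant
expressions:

	static inline void *__va(phys_addr_t pa)
	{
		/* Physical addresses must sit below PAGE_OFFSET */
		WARN_ON((unsigned long)pa >= PAGE_OFFSET);
		return (void *)((unsigned long)pa | PAGE_OFFSET);
	}

	static inline unsigned long __pa(const void *va)
	{
		/* Virtual addresses must sit at or above PAGE_OFFSET */
		WARN_ON((unsigned long)va < PAGE_OFFSET);
		return (unsigned long)va & 0x0fffffffffffffffUL;
	}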

Christophe


Re: [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

2019-07-29 Thread Christophe Leroy




Le 17/07/2019 à 10:06, Jason Yan a écrit :

When kaslr is enabled, the kernel offset is different for every boot.
This makes the kernel difficult to debug. Dump out the kernel
offset on panic so that we can easily debug the kernel.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/include/asm/page.h |  5 +
  arch/powerpc/kernel/machine_kexec.c |  1 +
  arch/powerpc/kernel/setup-common.c  | 23 +++
  3 files changed, 29 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 60a68d3a54b1..cd3ac530e58d 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -317,6 +317,11 @@ struct vm_area_struct;
  
  extern unsigned long kimage_vaddr;
  
+static inline unsigned long kaslr_offset(void)

+{
+   return kimage_vaddr - KERNELBASE;
+}
+
  #include 
  #endif /* __ASSEMBLY__ */
  #include 
diff --git a/arch/powerpc/kernel/machine_kexec.c 
b/arch/powerpc/kernel/machine_kexec.c
index c4ed328a7b96..078fe3d76feb 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
VMCOREINFO_OFFSET(mmu_psize_def, shift);
  #endif
+   vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
  }
  
  /*

diff --git a/arch/powerpc/kernel/setup-common.c 
b/arch/powerpc/kernel/setup-common.c
index 1f8db666468d..49e540c0adeb 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -715,12 +715,35 @@ static struct notifier_block ppc_panic_block = {
.priority = INT_MIN /* may not return; must be done last */
  };
  
+/*

+ * Dump out kernel offset information on panic.
+ */
+static int dump_kernel_offset(struct notifier_block *self, unsigned long v,
+ void *p)
+{
+   const unsigned long offset = kaslr_offset();
+
+   if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0)
+   pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
+offset, KERNELBASE);
+   else
+   pr_emerg("Kernel Offset: disabled\n");


Do we really need that else branch ?

Why not just make the below atomic_notifier_chain_register() 
conditional to IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0

and not print anything otherwise ?

Christophe


+
+   return 0;
+}
+
+static struct notifier_block kernel_offset_notifier = {
+   .notifier_call = dump_kernel_offset
+};
+
  void __init setup_panic(void)
  {
/* PPC64 always does a hard irq disable in its panic handler */
if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
return;
	atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block);
+   atomic_notifier_chain_register(&panic_notifier_list,
+  &kernel_offset_notifier);
  }
  
  #ifdef CONFIG_CHECK_CACHE_COHERENCY




Re: [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

One may want to disable kaslr at boot, so provide a cmdline parameter
'nokaslr' to support this.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/kaslr_booke.c | 14 ++
  1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c 
b/arch/powerpc/kernel/kaslr_booke.c
index 00339c05879f..e65a5d9d2ff1 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -373,6 +373,18 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
return kaslr_offset;
  }
  
+static inline __init bool kaslr_disabled(void)

+{
+   char *str;
+
+   str = strstr(early_command_line, "nokaslr");


Why using early_command_line instead of boot_command_line ?



+   if ((str == early_command_line) ||
+   (str > early_command_line && *(str - 1) == ' '))


Is that stuff really needed ?

Why not just:

return strstr(early_command_line, "nokaslr") != NULL;


+   return true;
+
+   return false;
+}




+
  /*
   * To see if we need to relocate the kernel to a random offset
   * void *dt_ptr - address of the device tree
@@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
kernel_sz = (unsigned long)_end - KERNELBASE;
  
  	kaslr_get_cmdline(dt_ptr);

+   if (kaslr_disabled())
+   return;
  
  	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
  



Christophe


Re: [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

Now that we have the basic support for relocating the kernel to an
appropriate place, we can start to randomize the offset.

Entropy is derived from the banner and timer, which will change every
build and boot. This is not very safe, so additionally the bootloader may
pass entropy via the /chosen/kaslr-seed node in the device tree.

We will use the first 512M of low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose a
16K-aligned offset inside the 64M zone to put the kernel in.

 KERNELBASE

 |-->   64M   <--|
 |   |
 +---+++---+
 |   |||kernel||   |
 +---+++---+
 | |
 |->   offset<-|

   kimage_vaddr

We also check if we will overlap with some areas like the dtb area, the
initrd area or the crashkernel area. If we cannot find a proper area,
kaslr will be disabled and the kernel will boot from its original location.
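
As an aside, the zone/offset selection described above boils down to a small
computation, sketched here for illustration only (choose_offset is a
placeholder name, the SZ_* constants are from linux/sizes.h, and the real
patch folds this logic into kaslr_choose_location()):

/* Pick a randomized, 16K-aligned kernel offset within the first 512M. */
static unsigned long choose_offset(unsigned long entropy)
{
	/* The low 8 bits of entropy select one of the 512M/64M = 8 zones. */
	unsigned long zone = (entropy & 0xff) % (SZ_512M / SZ_64M);
	/* The remaining bits pick one of the 16K-aligned slots in the zone. */
	unsigned long slot = (entropy >> 8) % (SZ_64M / SZ_16K);

	return zone * SZ_64M + slot * SZ_16K;
}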

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/kaslr_booke.c | 335 +-
  1 file changed, 333 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kaslr_booke.c 
b/arch/powerpc/kernel/kaslr_booke.c
index 72d8e9432048..90357f4bd313 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -22,6 +22,8 @@
  #include 
  #include 
  #include 
+#include 
+#include 
  #include 
  #include 
  #include 
@@ -33,15 +35,342 @@
  #include 
  #include 
  #include 
+#include 
  #include 
+#include 
+#include 
+
+#ifdef DEBUG
+#define DBG(fmt...) printk(KERN_ERR fmt)
+#else
+#define DBG(fmt...)
+#endif
+
+struct regions {
+   unsigned long pa_start;
+   unsigned long pa_end;
+   unsigned long kernel_size;
+   unsigned long dtb_start;
+   unsigned long dtb_end;
+   unsigned long initrd_start;
+   unsigned long initrd_end;
+   unsigned long crash_start;
+   unsigned long crash_end;
+   int reserved_mem;
+   int reserved_mem_addr_cells;
+   int reserved_mem_size_cells;
+};
  
  extern int is_second_reloc;
  
+/* Simplified build-specific string for starting entropy. */

+static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+   LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+static char __initdata early_command_line[COMMAND_LINE_SIZE];
+
+static __init void kaslr_get_cmdline(void *fdt)
+{
+   const char *cmdline = CONFIG_CMDLINE;
+   if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
+   int node;
+   const u8 *prop;
+   node = fdt_path_offset(fdt, "/chosen");
+   if (node < 0)
+   goto out;
+
+   prop = fdt_getprop(fdt, node, "bootargs", NULL);
+   if (!prop)
+   goto out;
+   cmdline = prop;
+   }
+out:
+   strscpy(early_command_line, cmdline, COMMAND_LINE_SIZE);
+}
+


Can you explain why we need that and can't use the already existing 
cmdline stuff ?


Christophe


+static unsigned long __init rotate_xor(unsigned long hash, const void *area,
+   size_t size)
+{
+   size_t i;
+   unsigned long *ptr = (unsigned long *)area;
+
+   for (i = 0; i < size / sizeof(hash); i++) {
+   /* Rotate by odd number of bits and XOR. */
+   hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+   hash ^= ptr[i];
+   }
+
+   return hash;
+}
+
+/* Attempt to create a simple but unpredictable starting entropy. */
+static unsigned long __init get_boot_seed(void *fdt)
+{
+   unsigned long hash = 0;
+
+   hash = rotate_xor(hash, build_str, sizeof(build_str));
+   hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
+
+   return hash;
+}
+
+static __init u64 get_kaslr_seed(void *fdt)
+{
+   int node, len;
+   fdt64_t *prop;
+   u64 ret;
+
+   node = fdt_path_offset(fdt, "/chosen");
+   if (node < 0)
+   return 0;
+
+   prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
+   if (!prop || len != sizeof(u64))
+   return 0;
+
+   ret = fdt64_to_cpu(*prop);
+   *prop = 0;
+   return ret;
+}
+
+static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
+{
+   return e1 >= s2 && e2 >= s1;
+}
+
+static __init bool overlaps_reserved_region(const void *fdt, u32 start,
+  u32 end, struct regions *regions)
+{
+   int subnode, len, i;
+   u64 base, size;
+
+   /* check for overlap with 

Re: [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

The original kernel still exists in memory; clear it now.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/kaslr_booke.c  | 11 +++
  arch/powerpc/mm/mmu_decl.h |  2 ++
  arch/powerpc/mm/nohash/fsl_booke.c |  1 +
  3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c 
b/arch/powerpc/kernel/kaslr_booke.c
index 90357f4bd313..00339c05879f 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -412,3 +412,14 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
  
  	reloc_kernel_entry(dt_ptr, kimage_vaddr);

  }
+
+void __init kaslr_second_init(void)
+{
+   /* If randomized, clear the original kernel */
+   if (kimage_vaddr != KERNELBASE) {
+   unsigned long kernel_sz;
+
+   kernel_sz = (unsigned long)_end - kimage_vaddr;
+   memset((void *)KERNELBASE, 0, kernel_sz);


Why are we clearing ? Is that just to tidy up or is it of security 
importance ?


If so, maybe memzero_explicit() should be used instead ?


+   }
+}
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 754ae1e69f92..9912ee598f9b 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -150,8 +150,10 @@ extern void loadcam_multi(int first_idx, int num, int tmp_idx);
  
  #ifdef CONFIG_RANDOMIZE_BASE

  extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+extern void kaslr_second_init(void);


No new 'extern' please.


  #else
  static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+static inline void kaslr_second_init(void) {}
  #endif
  
  struct tlbcam {

diff --git a/arch/powerpc/mm/nohash/fsl_booke.c 
b/arch/powerpc/mm/nohash/fsl_booke.c
index 8d25a8dc965f..fa5a87f5c08e 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -269,6 +269,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
kernstart_addr = start;
if (is_second_reloc) {
virt_phys_offset = PAGE_OFFSET - memstart_addr;
+   kaslr_second_init();
return;
}
  



Christophe


Re: [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

This patch adds support for booting the kernel from places other than
KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need to
do is map or copy the kernel to a proper place and relocate. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1). The
TLB1 entries are not suitable for mapping the kernel directly in a
randomized region, so we choose to copy the kernel to a proper place and
restart to relocate.

The offset of the kernel is not randomized yet (a fixed 64M offset is
used). We will randomize it in the next patch.
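
As an aside, the overall flow amounts to roughly the sketch below. This is an
editorial simplification, not the patch's code: create_tlb_entry() and
reloc_kernel_entry() are the helpers introduced earlier in this series, the
_sketch suffix marks an illustrative name, and the second-relocation details
are omitted.

notrace void __init kaslr_early_init_sketch(void *dt_ptr, phys_addr_t size)
{
	unsigned long offset = SZ_64M;	/* fixed for now; randomized later */
	unsigned long kernel_sz = (unsigned long)_end - KERNELBASE;
	unsigned long target = KERNELBASE + offset;

	/* Map the target region with a new fixed 64M TLB1 entry... */
	create_tlb_entry(kernstart_addr + offset, target, 1);
	/* ...copy the running image there... */
	memcpy((void *)target, (void *)KERNELBASE, kernel_sz);
	/* ...and restart at the copy, which relocates itself again. */
	reloc_kernel_entry(dt_ptr, target);
}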

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/Kconfig  | 11 +++
  arch/powerpc/kernel/Makefile  |  1 +
  arch/powerpc/kernel/early_32.c|  2 +-
  arch/powerpc/kernel/fsl_booke_entry_mapping.S | 13 ++-
  arch/powerpc/kernel/head_fsl_booke.S  | 15 +++-
  arch/powerpc/kernel/kaslr_booke.c | 83 +++
  arch/powerpc/mm/mmu_decl.h|  6 ++
  arch/powerpc/mm/nohash/fsl_booke.c|  7 +-
  8 files changed, 125 insertions(+), 13 deletions(-)
  create mode 100644 arch/powerpc/kernel/kaslr_booke.c



[...]


diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index dae8e9177574..754ae1e69f92 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -148,6 +148,12 @@ extern void reloc_kernel_entry(void *fdt, int addr);
  extern void loadcam_entry(unsigned int index);
  extern void loadcam_multi(int first_idx, int num, int tmp_idx);
  
+#ifdef CONFIG_RANDOMIZE_BASE

+extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);


No superfluous 'extern' keyword.

Christophe


+#else
+static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+#endif
+
  struct tlbcam {
u32 MAS0;
u32 MAS1;


Re: [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

Add a new helper reloc_kernel_entry() to jump back to the start of the
new kernel. After we put the new kernel in a randomized place we can use
this new helper to enter the kernel and begin to relocate again.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/head_fsl_booke.S | 16 
  arch/powerpc/mm/mmu_decl.h   |  1 +
  2 files changed, 17 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S 
b/arch/powerpc/kernel/head_fsl_booke.S
index a57d44638031..ce40f96dae20 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1144,6 +1144,22 @@ _GLOBAL(create_tlb_entry)
sync
blr
  
+/*

+ * Return to the start of the relocated kernel and run again
+ * r3 - virtual address of fdt
+ * r4 - entry of the kernel
+ */
+_GLOBAL(reloc_kernel_entry)
+   mfmsr   r7
+   li  r8,(MSR_IS | MSR_DS)
+   andc	r7,r7,r8


Instead of the li/andc, what about the following:

rlwinm r7, r7, 0, ~(MSR_IS | MSR_DS)


+
+   mtspr   SPRN_SRR0,r4
+   mtspr   SPRN_SRR1,r7
+   isync
+   sync
+   rfi


Are the isync/sync really necessary ? AFAIK, rfi is context synchronising.


+
  /*
   * Create a tlb entry with the same effective and physical address as
   * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index d7737cf97cee..dae8e9177574 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
  extern int switch_to_as1(void);
  extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
  extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
+extern void reloc_kernel_entry(void *fdt, int addr);


No new 'extern' please, see 
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8125//artifact/linux/checkpatch.log




  #endif
  extern void loadcam_entry(unsigned int index);
  extern void loadcam_multi(int first_idx, int num, int tmp_idx);



Christophe


Re: [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

Add a new helper create_tlb_entry() to create a tlb entry by the virtual
and physical address. This is a preparation for booting the kernel at a
randomized address.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 
---
  arch/powerpc/kernel/head_fsl_booke.S | 30 
  arch/powerpc/mm/mmu_decl.h   |  1 +
  2 files changed, 31 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S 
b/arch/powerpc/kernel/head_fsl_booke.S
index adf0505dbe02..a57d44638031 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
.long   -1
  #endif
  
+/*

+ * Create a 64M tlb by address and entry
+ * r3/r4 - physical address
+ * r5 - virtual address
+ * r6 - entry
+ */
+_GLOBAL(create_tlb_entry)
+   lis r7,0x1000   /* Set MAS0(TLBSEL) = 1 */
+   rlwimi  r7,r6,16,4,15   /* Setup MAS0 = TLBSEL | ESEL(r6) */
+   mtspr   SPRN_MAS0,r7/* Write MAS0 */
+
+   lis r6,(MAS1_VALID|MAS1_IPROT)@h
+   ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
+   mtspr   SPRN_MAS1,r6/* Write MAS1 */
+
+   lis r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+   ori r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+   and r6,r6,r5
+   ori r6,r6,MAS2_M@l
+   mtspr   SPRN_MAS2,r6/* Write MAS2(EPN) */
+
+   mr  r8,r4
+   ori r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)


Could drop the mr r8, r4 and do:

ori r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)


+   mtspr   SPRN_MAS3,r8/* Write MAS3(RPN) */
+
+   tlbwe   /* Write TLB */
+   isync
+   sync
+   blr
+
  /*
   * Create a tlb entry with the same effective and physical address as
   * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 32c1a191c28a..d7737cf97cee 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
  extern void adjust_total_lowmem(void);
  extern int switch_to_as1(void);
  extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
+extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);


Please please do not add new declarations with the useless 'extern' 
keyword. See checkpatch report: 
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8124//artifact/linux/checkpatch.log



  #endif
  extern void loadcam_entry(unsigned int index);
  extern void loadcam_multi(int first_idx, int num, int tmp_idx);



Re: [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

Now the kernel base is a fixed value - KERNELBASE. To support KASLR, we
need a variable to store the kernel base.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 


Reviewed-by: Christophe Leroy 



---
  arch/powerpc/include/asm/page.h | 2 ++
  arch/powerpc/mm/init-common.c   | 2 ++
  2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 0d52f57fca04..60a68d3a54b1 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -315,6 +315,8 @@ void arch_free_page(struct page *page, int order);
  
  struct vm_area_struct;
  
+extern unsigned long kimage_vaddr;

+
  #include 
  #endif /* __ASSEMBLY__ */
  #include 
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 9273c38009cb..c7a98c73e5c1 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -25,6 +25,8 @@ phys_addr_t memstart_addr = (phys_addr_t)~0ull;
  EXPORT_SYMBOL(memstart_addr);
  phys_addr_t kernstart_addr;
  EXPORT_SYMBOL(kernstart_addr);
+unsigned long kimage_vaddr = KERNELBASE;
+EXPORT_SYMBOL(kimage_vaddr);
  
  static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);

  static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);



Re: [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

These two variables are both defined in init_32.c and init_64.c. Move
them to init-common.c.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 


Reviewed-by: Christophe Leroy 



---
  arch/powerpc/mm/init-common.c | 5 +
  arch/powerpc/mm/init_32.c | 5 -
  arch/powerpc/mm/init_64.c | 5 -
  3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index a84da92920f7..9273c38009cb 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -21,6 +21,11 @@
  #include 
  #include 
  
+phys_addr_t memstart_addr = (phys_addr_t)~0ull;

+EXPORT_SYMBOL(memstart_addr);
+phys_addr_t kernstart_addr;
+EXPORT_SYMBOL(kernstart_addr);
+
  static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
  static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
  
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c

index b04896a88d79..872df48ae41b 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -56,11 +56,6 @@
  phys_addr_t total_memory;
  phys_addr_t total_lowmem;
  
-phys_addr_t memstart_addr = (phys_addr_t)~0ull;

-EXPORT_SYMBOL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL(kernstart_addr);
-
  #ifdef CONFIG_RELOCATABLE
  /* Used in __va()/__pa() */
  long long virt_phys_offset;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index a44f6281ca3a..c836f1269ee7 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -63,11 +63,6 @@
  
  #include 
  
-phys_addr_t memstart_addr = ~0;

-EXPORT_SYMBOL_GPL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL_GPL(kernstart_addr);
-
  #ifdef CONFIG_SPARSEMEM_VMEMMAP
  /*
   * Given an address within the vmemmap, determine the pfn of the page that



Re: [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED

2019-07-29 Thread Christophe Leroy




On 17/07/2019 at 10:06, Jason Yan wrote:

M_IF_NEEDED is defined too many times. Move it to a common place.

Signed-off-by: Jason Yan 
Cc: Diana Craciun 
Cc: Michael Ellerman 
Cc: Christophe Leroy 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Nicholas Piggin 
Cc: Kees Cook 


Reviewed-by: Christophe Leroy 


---
  arch/powerpc/include/asm/nohash/mmu-book3e.h  | 10 ++
  arch/powerpc/kernel/exceptions-64e.S  | 10 --
  arch/powerpc/kernel/fsl_booke_entry_mapping.S | 10 --
  arch/powerpc/kernel/misc_64.S |  5 -
  4 files changed, 10 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/mmu-book3e.h 
b/arch/powerpc/include/asm/nohash/mmu-book3e.h
index 4c9777d256fb..0877362e48fa 100644
--- a/arch/powerpc/include/asm/nohash/mmu-book3e.h
+++ b/arch/powerpc/include/asm/nohash/mmu-book3e.h
@@ -221,6 +221,16 @@
  #define TLBILX_T_CLASS2   6
  #define TLBILX_T_CLASS3   7
  
+/*

+ * The mapping only needs to be cache-coherent on SMP, except on
+ * Freescale e500mc derivatives where it's also needed for coherent DMA.
+ */
+#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
+#define M_IF_NEEDED	MAS2_M
+#else
+#define M_IF_NEEDED	0
+#endif
+
  #ifndef __ASSEMBLY__
  #include 
  
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S

index 1cfb3da4a84a..fd49ec07ce4a 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1342,16 +1342,6 @@ skpinv:	addi	r6,r6,1		/* Increment */
sync
isync
  
-/*

- * The mapping only needs to be cache-coherent on SMP, except on
- * Freescale e500mc derivatives where it's also needed for coherent DMA.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
-
  /* 6. Setup KERNELBASE mapping in TLB[0]
   *
   * r3 = MAS0 w/TLBSEL & ESEL for the entry we started in
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S 
b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index ea065282b303..de0980945510 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -153,16 +153,6 @@ skpinv:	addi	r6,r6,1		/* Increment */
tlbivax 0,r9
TLBSYNC
  
-/*

- * The mapping only needs to be cache-coherent on SMP, except on
- * Freescale e500mc derivatives where it's also needed for coherent DMA.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
-
  #if defined(ENTRY_MAPPING_BOOT_SETUP)
  
  /* 6. Setup KERNELBASE mapping in TLB1[0] */

diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index b55a7b4cb543..26074f92d4bc 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -432,11 +432,6 @@ kexec_create_tlb:
rlwimi  r9,r10,16,4,15  /* Setup MAS0 = TLBSEL | ESEL(r9) */
  
  /* Set up a temp identity mapping v:0 to p:0 and return to it. */

-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
mtspr   SPRN_MAS0,r9
  
  	lis	r9,(MAS1_VALID|MAS1_IPROT)@h




[PATCH v2 4/5] powerpc/PCI: Remove HAVE_ARCH_PCI_RESOURCE_TO_USER

2019-07-29 Thread Denis Efremov
The function pci_resource_to_user() was turned into a weak one. Thus, the
powerpc-specific version will automatically override the generic one
and the HAVE_ARCH_PCI_RESOURCE_TO_USER macro should be removed.

Signed-off-by: Denis Efremov 
---
 arch/powerpc/include/asm/pci.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
index 2372d35533ad..327567b8f7d6 100644
--- a/arch/powerpc/include/asm/pci.h
+++ b/arch/powerpc/include/asm/pci.h
@@ -112,8 +112,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file,
 unsigned long size,
 pgprot_t prot);
 
-#define HAVE_ARCH_PCI_RESOURCE_TO_USER
-
 extern resource_size_t pcibios_io_space_offset(struct pci_controller *hose);
 extern void pcibios_setup_bus_devices(struct pci_bus *bus);
 extern void pcibios_setup_bus_self(struct pci_bus *bus);
-- 
2.21.0



[PATCH v2 1/5] PCI: Convert pci_resource_to_user to a weak function

2019-07-29 Thread Denis Efremov
The patch turns pci_resource_to_user() into a weak function. Thus,
architecture-specific versions will automatically override the generic
one. This allows removing the HAVE_ARCH_PCI_RESOURCE_TO_USER macro and
avoids conditional compilation for this single function.

Signed-off-by: Denis Efremov 
---
 drivers/pci/pci.c   |  8 
 include/linux/pci.h | 12 
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 29ed5ec1ac27..f9dc7563a8b9 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -5932,6 +5932,14 @@ resource_size_t __weak pcibios_default_alignment(void)
return 0;
 }
 
+void __weak pci_resource_to_user(const struct pci_dev *dev, int bar,
+   const struct resource *rsrc, resource_size_t *start,
+   resource_size_t *end)
+{
+   *start = rsrc->start;
+   *end = rsrc->end;
+}
+
 #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
 static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
 static DEFINE_SPINLOCK(resource_alignment_lock);
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 9e700d9f9f28..dbdfdab1027b 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1870,25 +1870,13 @@ static inline const char *pci_name(const struct pci_dev *pdev)
 return dev_name(&pdev->dev);
 }
 
-
 /*
  * Some archs don't want to expose struct resource to userland as-is
  * in sysfs and /proc
  */
-#ifdef HAVE_ARCH_PCI_RESOURCE_TO_USER
 void pci_resource_to_user(const struct pci_dev *dev, int bar,
  const struct resource *rsrc,
  resource_size_t *start, resource_size_t *end);
-#else
-static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
-   const struct resource *rsrc, resource_size_t *start,
-   resource_size_t *end)
-{
-   *start = rsrc->start;
-   *end = rsrc->end;
-}
-#endif /* HAVE_ARCH_PCI_RESOURCE_TO_USER */
-
 
 /*
  * The world is not perfect and supplies us with broken PCI devices.
-- 
2.21.0



[PATCH v2 0/5] PCI: Convert pci_resource_to_user() to a weak function

2019-07-29 Thread Denis Efremov
Architectures currently define HAVE_ARCH_PCI_RESOURCE_TO_USER if they want
to provide their own pci_resource_to_user() implementation. This could be
simplified if we make the generic version a weak function. Thus,
architecture-specific versions will automatically override the generic one.
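
The weak-symbol mechanism relied on here is simple: a definition marked weak
is kept only if no strong definition of the same symbol is linked in. A toy
illustration (not kernel code; the names are made up):

/* generic.c: the default, marked weak */
void __attribute__((weak)) report(void)
{
	/* generic behaviour */
}

/* arch.c: optional strong override; when this file is linked in,
 * the linker discards the weak definition above */
void report(void)
{
	/* arch-specific behaviour */
}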

Changes in v2:
1. Removed __weak from pci_resource_to_user() declaration
2. Fixed typo s/spark/sparc/g

Denis Efremov (5):
  PCI: Convert pci_resource_to_user to a weak function
  microblaze/PCI: Remove HAVE_ARCH_PCI_RESOURCE_TO_USER
  mips/PCI: Remove HAVE_ARCH_PCI_RESOURCE_TO_USER
  powerpc/PCI: Remove HAVE_ARCH_PCI_RESOURCE_TO_USER
  sparc/PCI: Remove HAVE_ARCH_PCI_RESOURCE_TO_USER

 arch/microblaze/include/asm/pci.h |  2 --
 arch/mips/include/asm/pci.h   |  1 -
 arch/powerpc/include/asm/pci.h|  2 --
 arch/sparc/include/asm/pci.h  |  2 --
 drivers/pci/pci.c |  8 
 include/linux/pci.h   | 12 
 6 files changed, 8 insertions(+), 19 deletions(-)

-- 
2.21.0



Re: [EXTERNAL][PATCH 1/5] PCI: Convert pci_resource_to_user to a weak function

2019-07-29 Thread Denis Efremov

Hi Paul,

On 29.07.2019 01:49, Paul Burton wrote:

Hi Denis,

This is wrong - using __weak on the declaration in a header will cause
the weak attribute to be applied to all implementations too (presuming
the C files containing the implementations include the header). You then
get whichever implementation the linker chooses, which isn't necessarily
the one you wanted.


Thank you for pointing me on that. I will prepare the v2.


Re: [PATCH 4/5] dma-mapping: provide a better default ->get_required_mask

2019-07-29 Thread Geert Uytterhoeven
Hi Christoph,

On Thu, Jul 25, 2019 at 8:35 AM Christoph Hellwig  wrote:
> Most dma_map_ops instances are IOMMUs that work perfectly fine in 32-bits
> of IOVA space, and the generic direct mapping code already provides its
> own routines that are intelligent based on the amount of memory actually
> present.  Wire up the dma-direct routine for the ARM direct mapping code
> as well, and otherwise default to the constant 32-bit mask.  This way
> we only need to override it for the occasional odd IOMMU that requires
> 64-bit IOVA support, or IOMMU drivers that are more efficient if they
> can fall back to the direct mapping.

As I know you like diving into cans of worms ;-)

Does 64-bit IOVA support actually work in general? Or only on 64-bit
platforms, due to dma_addr_t to unsigned long truncation on 32-bit?

https://lore.kernel.org/linux-renesas-soc/camuhmdwkq918y61tmjbheu29agleynwbvzbsbb-rrh7yyun...@mail.gmail.com/

Gr{oetje,eeting}s,

Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds
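
As an aside, the default described in the quoted patch amounts to roughly the
following composition. This is an illustrative sketch, not the actual patch
code, and the helper name is a placeholder:

u64 dma_get_required_mask_sketch(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/* Direct mapping: size the mask from the memory actually present. */
	if (dma_is_direct(ops))
		return dma_direct_get_required_mask(dev);
	/* The occasional odd IOMMU can still override... */
	if (ops->get_required_mask)
		return ops->get_required_mask(dev);
	/* ...everything else gets the constant 32-bit mask. */
	return DMA_BIT_MASK(32);
}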


[PATCH v3] powerpc/nvdimm: Pick nearby online node if the device node is not online

2019-07-29 Thread Aneesh Kumar K.V
Currently, the nvdimm subsystem expects the device NUMA node for an SCM device to
be an online node. It also doesn't try to bring the device NUMA node online. Hence
if we use a non-online NUMA node as the device node we hit crashes like the one
below. This is because we try to access uninitialized NODE_DATA in different code paths.

cpu 0x0: Vector: 300 (Data Access) at [c000fac53170]
pc: c04bbc50: ___slab_alloc+0x120/0xca0
lr: c04bc834: __slab_alloc+0x64/0xc0
sp: c000fac53400
   msr: 82009033
   dar: 73e8
 dsisr: 8
  current = 0xc000fabb6d80
  paca= 0xc387   irqmask: 0x03   irq_happened: 0x01
pid   = 7, comm = kworker/u16:0
Linux version 5.2.0-06234-g76bd729b2644 (kvaneesh@ltc-boston123) (gcc version 
7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #135 SMP Thu Jul 11 05:36:30 CDT 2019
enter ? for help
[link register   ] c04bc834 __slab_alloc+0x64/0xc0
[c000fac53400] c000fac53480 (unreliable)
[c000fac53500] c04bc818 __slab_alloc+0x48/0xc0
[c000fac53560] c04c30a0 __kmalloc_node_track_caller+0x3c0/0x6b0
[c000fac535d0] c0cfafe4 devm_kmalloc+0x74/0xc0
[c000fac53600] c0d69434 nd_region_activate+0x144/0x560
[c000fac536d0] c0d6b19c nd_region_probe+0x17c/0x370
[c000fac537b0] c0d6349c nvdimm_bus_probe+0x10c/0x230
[c000fac53840] c0cf3cc4 really_probe+0x254/0x4e0
[c000fac538d0] c0cf429c driver_probe_device+0x16c/0x1e0
[c000fac53950] c0cf0b44 bus_for_each_drv+0x94/0x130
[c000fac539b0] c0cf392c __device_attach+0xdc/0x200
[c000fac53a50] c0cf231c bus_probe_device+0x4c/0xf0
[c000fac53a90] c0ced268 device_add+0x528/0x810
[c000fac53b60] c0d62a58 nd_async_device_register+0x28/0xa0
[c000fac53bd0] c01ccb8c async_run_entry_fn+0xcc/0x1f0
[c000fac53c50] c01bcd9c process_one_work+0x46c/0x860
[c000fac53d20] c01bd4f4 worker_thread+0x364/0x5f0
[c000fac53db0] c01c7260 kthread+0x1b0/0x1c0
[c000fac53e20] c000b954 ret_from_kernel_thread+0x5c/0x68

The patch tries to fix this by picking the nearest online node as the SCM node.
This does have the drawback that we lose the information that the SCM node is
equidistant from two other online nodes. If applications need to understand these
fine-grained details we should express them like x86 does via
/sys/devices/system/node/nodeX/accessY/initiators/

With the patch we get

 # numactl -H
available: 2 nodes (0-1)
node 0 cpus:
node 0 size: 0 MB
node 0 free: 0 MB
node 1 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 
25 26 27 28 29 30 31
node 1 size: 130865 MB
node 1 free: 129130 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
 # cat /sys/bus/nd/devices/region0/numa_node
0
 # dmesg | grep papr_scm
[   91.332305] papr_scm ibm,persistent-memory:ibm,pmemory@44104001: Region 
registered with target node 2 and online node 0

Signed-off-by: Aneesh Kumar K.V 
---
Changes from V2:
* Update commit message
* Don't update platform device numa node

Changes from V1:
* handle NUMA_NO_NODE

 arch/powerpc/platforms/pseries/papr_scm.c | 29 +--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/papr_scm.c 
b/arch/powerpc/platforms/pseries/papr_scm.c
index 2c07908359b2..a5ac371a3f06 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -275,12 +275,32 @@ static const struct attribute_group *papr_scm_dimm_groups[] = {
NULL,
 };
 
+static inline int papr_scm_node(int node)
+{
+   int min_dist = INT_MAX, dist;
+   int nid, min_node;
+
+   if ((node == NUMA_NO_NODE) || node_online(node))
+   return node;
+
+   min_node = first_online_node;
+   for_each_online_node(nid) {
+   dist = node_distance(node, nid);
+   if (dist < min_dist) {
+   min_dist = dist;
+   min_node = nid;
+   }
+   }
+   return min_node;
+}
+
 static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
 {
struct device *dev = &p->pdev->dev;
struct nd_mapping_desc mapping;
struct nd_region_desc ndr_desc;
unsigned long dimm_flags;
+   int target_nid, online_nid;
 
p->bus_desc.ndctl = papr_scm_ndctl;
p->bus_desc.module = THIS_MODULE;
@@ -319,8 +339,10 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
 
memset(&ndr_desc, 0, sizeof(ndr_desc));
ndr_desc.attr_groups = region_attr_groups;
-   ndr_desc.numa_node = dev_to_node(&p->pdev->dev);
-   ndr_desc.target_node = ndr_desc.numa_node;
+   target_nid = dev_to_node(>pdev->dev);
+   online_nid = papr_scm_node(target_nid);
+   ndr_desc.numa_node = online_nid;
+   ndr_desc.target_node = target_nid;
ndr_desc.res = &p->res;
ndr_desc.of_node = p->dn;

Re: [PATCH] Fix typo reigster to register

2019-07-29 Thread Liviu Dudau
Hi Pei,

On Sat, Jul 27, 2019 at 10:21:09PM +0800, Pei Hsuan Hung wrote:
> Signed-off-by: Pei Hsuan Hung 
> Cc: triv...@kernel.org
> ---
>  arch/powerpc/kernel/eeh.c   | 2 +-
>  arch/powerpc/platforms/cell/spufs/switch.c  | 4 ++--
>  drivers/extcon/extcon-rt8973a.c | 2 +-
>  drivers/gpu/drm/arm/malidp_regs.h   | 2 +-
>  drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h | 2 +-
>  drivers/scsi/lpfc/lpfc_hbadisc.c| 4 ++--
>  fs/userfaultfd.c| 2 +-
>  7 files changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
> index c0e4b73191f3..d75c9c24ec4d 100644
> --- a/arch/powerpc/kernel/eeh.c
> +++ b/arch/powerpc/kernel/eeh.c
> @@ -1030,7 +1030,7 @@ int __init eeh_ops_register(struct eeh_ops *ops)
>  }
>  
>  /**
> - * eeh_ops_unregister - Unreigster platform dependent EEH operations
> + * eeh_ops_unregister - Unregister platform dependent EEH operations
>   * @name: name of EEH platform operations
>   *
>   * Unregister the platform dependent EEH operation callback
> diff --git a/arch/powerpc/platforms/cell/spufs/switch.c 
> b/arch/powerpc/platforms/cell/spufs/switch.c
> index 5c3f5d088c3b..9548a086937b 100644
> --- a/arch/powerpc/platforms/cell/spufs/switch.c
> +++ b/arch/powerpc/platforms/cell/spufs/switch.c
> @@ -574,7 +574,7 @@ static inline void save_mfc_rag(struct spu_state *csa, 
> struct spu *spu)
>  {
>   /* Save, Step 38:
>* Save RA_GROUP_ID register and the
> -  * RA_ENABLE reigster in the CSA.
> +  * RA_ENABLE register in the CSA.
>*/
>   csa->priv1.resource_allocation_groupID_RW =
>   spu_resource_allocation_groupID_get(spu);
> @@ -1227,7 +1227,7 @@ static inline void restore_mfc_rag(struct spu_state 
> *csa, struct spu *spu)
>  {
>   /* Restore, Step 29:
>* Restore RA_GROUP_ID register and the
> -  * RA_ENABLE reigster from the CSA.
> +  * RA_ENABLE register from the CSA.
>*/
>   spu_resource_allocation_groupID_set(spu,
>   csa->priv1.resource_allocation_groupID_RW);
> diff --git a/drivers/extcon/extcon-rt8973a.c b/drivers/extcon/extcon-rt8973a.c
> index 40c07f4d656e..e75c03792398 100644
> --- a/drivers/extcon/extcon-rt8973a.c
> +++ b/drivers/extcon/extcon-rt8973a.c
> @@ -270,7 +270,7 @@ static int rt8973a_muic_get_cable_type(struct 
> rt8973a_muic_info *info)
>   }
>   cable_type = adc & RT8973A_REG_ADC_MASK;
>  
> - /* Read Device 1 reigster to identify correct cable type */
> + /* Read Device 1 register to identify correct cable type */
>   ret = regmap_read(info->regmap, RT8973A_REG_DEV1, );
>   if (ret) {
>   dev_err(info->dev, "failed to read DEV1 register\n");
> diff --git a/drivers/gpu/drm/arm/malidp_regs.h 
> b/drivers/gpu/drm/arm/malidp_regs.h
> index 993031542fa1..0d81b34a4212 100644
> --- a/drivers/gpu/drm/arm/malidp_regs.h
> +++ b/drivers/gpu/drm/arm/malidp_regs.h
> @@ -145,7 +145,7 @@
>  #define MALIDP_SE_COEFFTAB_DATA_MASK 0x3fff
>  #define MALIDP_SE_SET_COEFFTAB_DATA(x) \
>   ((x) & MALIDP_SE_COEFFTAB_DATA_MASK)
> -/* Enhance coeffents reigster offset */
> +/* Enhance coeffents register offset */

Unless this patch was generated by a script I think it is worth correcting the
other spelling mistake on that line as well: coefficients rather than coeffents.

With that: Acked-by: Liviu Dudau 

Best regards,
Liviu

>  #define MALIDP_SE_IMAGE_ENH  0x3C
>  /* ENH_LIMITS offset 0x0 */
>  #define MALIDP_SE_ENH_LOW_LEVEL  24
> diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h 
> b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h
> index 99c6f7eefd85..d03c8f12a15c 100644
> --- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h
> +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.h
> @@ -58,7 +58,7 @@ struct fw_priv {
>   /* 0x81: PCI-AP, 01:PCIe, 02: 92S-U,
>* 0x82: USB-AP, 0x12: 72S-U, 03:SDIO */
>   u8 hci_sel;
> - /* the same value as reigster value  */
> + /* the same value as register value  */
>   u8 chip_version;
>   /* customer  ID low byte */
>   u8 customer_id_0;
> diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c 
> b/drivers/scsi/lpfc/lpfc_hbadisc.c
> index 28ecaa7fc715..9e116bd79836 100644
> --- a/drivers/scsi/lpfc/lpfc_hbadisc.c
> +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
> @@ -6551,7 +6551,7 @@ lpfc_sli4_unregister_fcf(struct lpfc_hba *phba)
>   * lpfc_unregister_fcf_rescan - Unregister currently registered fcf and 
> rescan
>   * @phba: Pointer to hba context object.
>   *
> - * This function unregisters the currently reigstered FCF. This function
> + * This function unregisters the currently registered FCF. This function
>   * also tries to find another FCF for discovery by rescan the HBA FCF table.
>   */
>  void
> @@ -6609,7 +6609,7 @@ 

Re: [PATCH] powerpc/kvm: Fall through switch case explicitly

2019-07-29 Thread Stephen Rothwell
Hi Santosh,

On Mon, 29 Jul 2019 11:25:36 +0530 Santosh Sivaraj  wrote:
>
> The implicit fallthrough warning was enabled globally, which broke
> the build. Make it explicit with a `fall through` comment.
> 
> Signed-off-by: Santosh Sivaraj 
> ---
>  arch/powerpc/kvm/book3s_32_mmu.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/powerpc/kvm/book3s_32_mmu.c 
> b/arch/powerpc/kvm/book3s_32_mmu.c
> index 653936177857..18f244aad7aa 100644
> --- a/arch/powerpc/kvm/book3s_32_mmu.c
> +++ b/arch/powerpc/kvm/book3s_32_mmu.c
> @@ -239,6 +239,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu 
> *vcpu, gva_t eaddr,
>   case 2:
>   case 6:
>   pte->may_write = true;
> + /* fall through */
>   case 3:
>   case 5:
>   case 7:
> -- 
> 2.20.1
> 

Thanks

Reviewed-by: Stephen Rothwell 

This only shows up as a warning in a powerpc allyesconfig build.
-- 
Cheers,
Stephen Rothwell




Re: [RFC PATCH v2] powerpc/xmon: restrict when kernel is locked down

2019-07-29 Thread Daniel Axtens
Hi Chris,

 Remind me again why we need to clear breakpoints in integrity mode?
...
>> Integrity mode merely means we are aiming to prevent modifications to 
>> kernel memory. IMHO leaving existing breakpoints in place is fine as 
>> long as when we hit the breakpoint xmon is in read-only mode.
>>
...
> I think ajd is right. 
>
> I think about it like this. There are 2 transitions:
>
>  - into integrity mode
>
>Here, we need to go into r/o, but do not need to clear breakpoints.
>You can still insert breakpoints in readonly mode, so clearing them
>just makes things more irritating rather than safer.
>
>  - into confidentiality mode
>
>Here we need to purge breakpoints and disable xmon completely.

Would you be able to send a v2 with these changes? (that is, not purging
breakpoints when entering integrity mode)

Regards,
Daniel


Re: [EXTERNAL][PATCH 1/5] PCI: Convert pci_resource_to_user to a weak function

2019-07-29 Thread Joe Perches
On Sun, 2019-07-28 at 22:49 +, Paul Burton wrote:
> Hi Denis,
> 
> On Sun, Jul 28, 2019 at 11:22:09PM +0300, Denis Efremov wrote:
> > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > index 9e700d9f9f28..1a19d0151b0a 100644
> > --- a/include/linux/pci.h
> > +++ b/include/linux/pci.h
> > @@ -1870,25 +1870,13 @@ static inline const char *pci_name(const struct pci_dev *pdev)
> > return dev_name(&pdev->dev);
> >  }
> >  
> > -
> >  /*
> >   * Some archs don't want to expose struct resource to userland as-is
> >   * in sysfs and /proc
> >   */
> > -#ifdef HAVE_ARCH_PCI_RESOURCE_TO_USER
> > -void pci_resource_to_user(const struct pci_dev *dev, int bar,
> > - const struct resource *rsrc,
> > - resource_size_t *start, resource_size_t *end);
> > -#else
> > -static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
> > -   const struct resource *rsrc, resource_size_t *start,
> > -   resource_size_t *end)
> > -{
> > -   *start = rsrc->start;
> > -   *end = rsrc->end;
> > -}
> > -#endif /* HAVE_ARCH_PCI_RESOURCE_TO_USER */
> > -
> > +void __weak pci_resource_to_user(const struct pci_dev *dev, int bar,
> > +const struct resource *rsrc,
> > +resource_size_t *start, resource_size_t *end);
> >  
> >  /*
> >   * The world is not perfect and supplies us with broken PCI devices.
> 
> This is wrong - using __weak on the declaration in a header will cause
> the weak attribute to be applied to all implementations too (presuming
> the C files containing the implementations include the header). You then
> get whichever implementation the linker chooses, which isn't necessarily
> the one you wanted.
> 
> checkpatch.pl should produce an error about this - see the
> WEAK_DECLARATION error introduced in commit 619a908aa334 ("checkpatch:
> add error on use of attribute((weak)) or __weak declarations").

Unfortunately, checkpatch is pretty stupid and only emits
this on single line declarations.
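
To spell out the pitfall with a toy example (illustrative, not kernel code):
__weak on a declaration in a shared header weakens every definition that
includes the header.

/* pci.h */
void __weak f(void);		/* wrong: every definition below is now weak */

/* generic.c */
void f(void) { /* generic */ }	/* weak, via the header... */

/* arch.c */
void f(void) { /* arch */ }	/* ...and so is this one: the linker may keep
				 * either. Marking only the generic
				 * *definition* __weak (in the .c file) keeps
				 * the arch override deterministic. */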