Re: [PATCH 1/3] perf tests test_arm_coresight: Fix the shellcheck warning in latest test_arm_coresight.sh

2023-10-12 Thread Suzuki K Poulose

Hi,

On 12/10/2023 16:56, Athira Rajeev wrote:




On 05-Oct-2023, at 3:06 PM, Suzuki K Poulose  wrote:

On 05/10/2023 06:02, Namhyung Kim wrote:

On Thu, Sep 28, 2023 at 9:11 PM Athira Rajeev
 wrote:




...


Thanks for the fix.

Nothing to do with this patch, but I am wondering if the original patch
is over engineered and may not be future proof.

e.g.,

cs_etm_dev_name() {
+ cs_etm_path=$(find  /sys/bus/event_source/devices/cs_etm/ -name cpu* -print -quit)

Right there you got the device name and we can easily deduce the name of
the "ETM" node.

e.g,:
etm=$(basename "$(readlink "$cs_etm_path")" | sed "s/[0-9]\+$//")

And practically, nobody prevents an ETE mixed with an ETM on a "hybrid"
system (hopefully, no one builds it ;-))

Also, instead of hardcoding "ete" and "etm" prefixes from the arch part,
we should simply use the cpu nodes from :

/sys/bus/event_source/devices/cs_etm/

e.g.,

arm_cs_etm_traverse_path_test() {
# Iterate for every ETM device
for c in /sys/bus/event_source/devices/cs_etm/cpu*; do
# Read the link to be on the safer side
dev=`readlink $c`

# Find the ETM device belonging to which CPU
cpu=`cat $dev/cpu`

# Use depth-first search (DFS) to iterate outputs
arm_cs_iterate_devices $dev $cpu
done;
}
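
For reference, a minimal sketch of what arm_cs_iterate_devices could look
like -- this is not the in-tree helper, and the "connections/out:*" sysfs
layout is an assumption here -- it only illustrates the DFS idea of
following each output link until a sink is reached:

# Hedged sketch, not the in-tree implementation: recursively follow
# each "out:" connection link of a device; a node without output
# connections is a sink, where the recursion stops.
arm_cs_iterate_devices() {
	for dev in "$1"/connections/out:*; do
		# Glob didn't match => no output connection => sink reached
		[ ! -d "$dev" ] && continue
		path=$(readlink -f "$dev")
		# ... validate/record the device for CPU $2 here ...
		arm_cs_iterate_devices "$path" "$2"
	done
}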




You'd better add Coresight folks on this.
Maybe this file was missing in the MAINTAINERS file.


And the original author of the commit, that introduced the issue too.

Suzuki


Hi All,
Thanks for the discussion and feedback.

This patch fixes the shellcheck warning introduced in function "cs_etm_dev_name". But with the 
changes that Suzuki suggested, we won't need the function "cs_etm_dev_name" since the code will use 
"/sys/bus/event_source/devices/cs_etm/" .  In that case, can I drop this patch for now from this 
series ?



Yes, please. James will send out the proposed patch

Suzuki




Re: [PATCH 1/3] perf tests test_arm_coresight: Fix the shellcheck warning in latest test_arm_coresight.sh

2023-10-05 Thread Suzuki K Poulose

On 05/10/2023 06:02, Namhyung Kim wrote:

On Thu, Sep 28, 2023 at 9:11 PM Athira Rajeev
 wrote:


Running shellcheck on tests/shell/test_arm_coresight.sh
throws below warnings:

 In tests/shell/test_arm_coresight.sh line 15:
 cs_etm_path=$(find  /sys/bus/event_source/devices/cs_etm/ -name cpu* -print -quit)
   ^--^ SC2061: Quote the parameter to -name so the shell won't interpret it.

 In tests/shell/test_arm_coresight.sh line 20:
 if [ $archhver -eq 5 -a "$(printf "0x%X\n" $archpart)" = "0xA13" ] ; then
  ^-- SC2166: Prefer [ p ] && [ q ] as [ p -a q ] is not well defined

This warning is observed after commit:
"commit bb350847965d ("perf test: Update cs_etm testcase for Arm ETE")"

Fixed this issue by quoting 'cpu*' for SC2061 and
using "&&" on line 20 for the SC2166 warning.

Fixes: bb350847965d ("perf test: Update cs_etm testcase for Arm ETE")
Signed-off-by: Athira Rajeev 


Thanks for the fix.

Nothing to do with this patch, but I am wondering if the original patch
is over engineered and may not be future proof.

e.g.,

cs_etm_dev_name() {
+	cs_etm_path=$(find  /sys/bus/event_source/devices/cs_etm/ -name cpu* -print -quit)


Right there you got the device name and we can easily deduce the name of
the "ETM" node.

e.g,:
etm=$(basename "$(readlink "$cs_etm_path")" | sed "s/[0-9]\+$//")

And practically, nobody prevents an ETE mixed with an ETM on a "hybrid"
system (hopefully, no one builds it ;-))

Also, instead of hardcoding "ete" and "etm" prefixes from the arch part,
we should simply use the cpu nodes from :

/sys/bus/event_source/devices/cs_etm/

e.g.,

arm_cs_etm_traverse_path_test() {
# Iterate for every ETM device
for c in /sys/bus/event_source/devices/cs_etm/cpu*; do
# Read the link to be on the safer side
dev=`readlink $c`

# Find the ETM device belonging to which CPU
cpu=`cat $dev/cpu`

# Use depth-first search (DFS) to iterate outputs
arm_cs_iterate_devices $dev $cpu
done;
}





You'd better add Coresight folks on this.
Maybe this file was missing in the MAINTAINERS file.


And the original author of the commit, that introduced the issue too.

Suzuki



Thanks,
Namhyung



---
  tools/perf/tests/shell/test_arm_coresight.sh | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/perf/tests/shell/test_arm_coresight.sh b/tools/perf/tests/shell/test_arm_coresight.sh
index fe78c4626e45..f2115dfa24a5 100755
--- a/tools/perf/tests/shell/test_arm_coresight.sh
+++ b/tools/perf/tests/shell/test_arm_coresight.sh
@@ -12,12 +12,12 @@
  glb_err=0

  cs_etm_dev_name() {
-   cs_etm_path=$(find  /sys/bus/event_source/devices/cs_etm/ -name cpu* -print -quit)
+   cs_etm_path=$(find  /sys/bus/event_source/devices/cs_etm/ -name 'cpu*' -print -quit)
 trcdevarch=$(cat ${cs_etm_path}/mgmt/trcdevarch)
 archhver=$((($trcdevarch >> 12) & 0xf))
 archpart=$(($trcdevarch & 0xfff))

-   if [ $archhver -eq 5 -a "$(printf "0x%X\n" $archpart)" = "0xA13" ] ; then
+   if [ $archhver -eq 5 ] && [ "$(printf "0x%X\n" $archpart)" = "0xA13" ] ; then
 echo "ete"
 else
 echo "etm"
--
2.31.1
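
As an aside, the practical hazard behind SC2061 is that the shell expands
an unquoted cpu* against the current working directory before find ever
sees it. A hedged demonstration (hypothetical files; any directory with a
matching name triggers it):

# If the current directory happens to contain a file matching cpu*,
# the unquoted pattern is rewritten by the shell before find runs.
cd /tmp && touch cpu0
find /sys/bus/event_source/devices/cs_etm/ -name cpu*    # becomes: -name cpu0
find /sys/bus/event_source/devices/cs_etm/ -name 'cpu*'  # pattern reaches find intact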





Re: [PATCH 09/30] coresight: cpu-debug: Replace mutex with mutex_trylock on panic notifier

2022-05-09 Thread Suzuki K Poulose

Hi

On 09/05/2022 14:09, Guilherme G. Piccoli wrote:

On 28/04/2022 05:11, Suzuki K Poulose wrote:

Hi Guilherme,

On 27/04/2022 23:49, Guilherme G. Piccoli wrote:

The panic notifier infrastructure executes registered callbacks when
a panic event happens - such callbacks are executed in atomic context,
with interrupts and preemption disabled in the running CPU and all other
CPUs disabled. That said, mutexes in such context are not a good idea.

This patch replaces a regular mutex with the safer mutex_trylock approach;
given the nature of the mutex used in the driver, it should be pretty
uncommon to fail to acquire it in the panic path, hence no functional
change should be observed (and if one is observed, it would likely have
been a deadlock with the regular mutex).

Fixes: 2227b7c74634 ("coresight: add support for CPU debug module")
Cc: Leo Yan 
Cc: Mathieu Poirier 
Cc: Mike Leach 
Cc: Suzuki K Poulose 
Signed-off-by: Guilherme G. Piccoli 


How would you like to proceed with queuing this ? I am happy
either way. In case you plan to push this as part of this
series (I don't see any potential conflicts) :

Reviewed-by: Suzuki K Poulose 


Hi Suzuki, some other maintainers are taking the patches to their next
branches for example. I'm working on V2, and I guess in the end would be
nice to reduce the size of the series a bit.

So, do you think you could pick this one for your coresight/next branch
(or even for rc cycle, your call - this is really a fix)?
This way, I won't re-submit this one in V2, since it's gonna be merged
already in your branch.


I have queued this to coresight/next.

Thanks
Suzuki


Re: [PATCH 20/30] panic: Add the panic informational notifier list

2022-04-28 Thread Suzuki K Poulose

On 27/04/2022 23:49, Guilherme G. Piccoli wrote:

The goal of this new panic notifier is to allow its users to
register callbacks to run earlier in the panic path than they
currently do. This aims at informational mechanisms, like dumping
kernel offsets and showing device error data (in case it's simple
registers reading, for example) as well as mechanisms to disable
log flooding (like hung_task detector / RCU warnings) and the
tracing dump_on_oops (when enabled).

Any (non-invasive) information that should be provided before
kmsg_dump(), as well as log-flood-preventing code, should fit
here, as long as it offers relatively low risk for kdump.

For now, the patch is almost a no-op, although it slightly changes
the ordering in which some panic notifiers are executed - especially
affected by this are the notifiers responsible for disabling the
hung_task detector / RCU warnings, which now run first. In a
subsequent patch, the panic path will be refactored, then the
panic informational notifiers will effectively run earlier,
before kmsg_dump() (and usually before kdump as well).

We also defer documenting it all properly in the subsequent
refactor patch. Finally, while at it, we removed some useless
header inclusions too.

Cc: Benjamin Herrenschmidt 
Cc: Catalin Marinas 
Cc: Florian Fainelli 
Cc: Frederic Weisbecker 
Cc: "H. Peter Anvin" 
Cc: Hari Bathini 
Cc: Joel Fernandes 
Cc: Jonathan Hunter 
Cc: Josh Triplett 
Cc: Lai Jiangshan 
Cc: Leo Yan 
Cc: Mathieu Desnoyers 
Cc: Mathieu Poirier 
Cc: Michael Ellerman 
Cc: Mike Leach 
Cc: Mikko Perttunen 
Cc: Neeraj Upadhyay 
Cc: Nicholas Piggin 
Cc: Paul Mackerras 
Cc: Suzuki K Poulose 
Cc: Thierry Reding 
Cc: Thomas Bogendoerfer 
Signed-off-by: Guilherme G. Piccoli 
---
  arch/arm64/kernel/setup.c | 2 +-
  arch/mips/kernel/relocate.c   | 2 +-
  arch/powerpc/kernel/setup-common.c| 2 +-
  arch/x86/kernel/setup.c   | 2 +-
  drivers/bus/brcmstb_gisb.c| 2 +-
  drivers/hwtracing/coresight/coresight-cpu-debug.c | 4 ++--
  drivers/soc/tegra/ari-tegra186.c  | 3 ++-
  include/linux/panic_notifier.h| 1 +
  kernel/hung_task.c| 3 ++-
  kernel/panic.c| 4 
  kernel/rcu/tree.c | 1 -
  kernel/rcu/tree_stall.h   | 3 ++-
  kernel/trace/trace.c  | 2 +-
  13 files changed, 19 insertions(+), 12 deletions(-)



...


diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
index 1874df7c6a73..7b1012454525 100644
--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
+++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
@@ -535,7 +535,7 @@ static int debug_func_init(void)
 			    &debug_func_knob_fops);
 
 	/* Register function to be called for panic */
 
-	ret = atomic_notifier_chain_register(&panic_notifier_list,
+	ret = atomic_notifier_chain_register(&panic_info_list,
 					     &debug_notifier);
if (ret) {
pr_err("%s: unable to register notifier: %d\n",
@@ -552,7 +552,7 @@ static int debug_func_init(void)
  
  static void debug_func_exit(void)

  {
-	atomic_notifier_chain_unregister(&panic_notifier_list,
+	atomic_notifier_chain_unregister(&panic_info_list,
 					 &debug_notifier);
debugfs_remove_recursive(debug_debugfs_dir);
  }


Acked-by: Suzuki K Poulose 
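
For readers unfamiliar with the new chain, a hedged sketch of a client of
panic_info_list (the list name is taken from this series; the callback,
its name and the init hook are illustrative only):

/* Minimal sketch: register an informational callback that runs in the
 * panic path before kmsg_dump(); it must be non-invasive and atomic-safe. */
#include <linux/notifier.h>
#include <linux/panic_notifier.h>
#include <linux/printk.h>

static int my_info_cb(struct notifier_block *nb, unsigned long ev, void *p)
{
	pr_emerg("my device state: ...\n");	/* simple register dumps only */
	return NOTIFY_DONE;
}

static struct notifier_block my_info_nb = {
	.notifier_call = my_info_cb,
};

static int __init my_info_init(void)
{
	return atomic_notifier_chain_register(&panic_info_list, &my_info_nb);
}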



Re: [PATCH 09/30] coresight: cpu-debug: Replace mutex with mutex_trylock on panic notifier

2022-04-28 Thread Suzuki K Poulose

Hi Guilherme,

On 27/04/2022 23:49, Guilherme G. Piccoli wrote:

The panic notifier infrastructure executes registered callbacks when
a panic event happens - such callbacks are executed in atomic context,
with interrupts and preemption disabled in the running CPU and all other
CPUs disabled. That said, mutexes in such context are not a good idea.

This patch replaces a regular mutex with the safer mutex_trylock approach;
given the nature of the mutex used in the driver, it should be pretty
uncommon to fail to acquire it in the panic path, hence no functional
change should be observed (and if one is observed, it would likely have
been a deadlock with the regular mutex).

Fixes: 2227b7c74634 ("coresight: add support for CPU debug module")
Cc: Leo Yan 
Cc: Mathieu Poirier 
Cc: Mike Leach 
Cc: Suzuki K Poulose 
Signed-off-by: Guilherme G. Piccoli 


How would you like to proceed with queuing this ? I am happy
either way. In case you plan to push this as part of this
series (I don't see any potential conflicts) :

Reviewed-by: Suzuki K Poulose 
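
For reference, the pattern being reviewed, as a hedged sketch (names are
illustrative, not the exact driver code): in a panic notifier the mutex
owner can never run again, so the callback must try once and bail out
rather than block.

#include <linux/mutex.h>
#include <linux/notifier.h>

static DEFINE_MUTEX(debug_lock);

static int debug_notifier_call(struct notifier_block *self,
			       unsigned long v, void *p)
{
	/* Contended in the panic path: skip dumping instead of deadlocking */
	if (!mutex_trylock(&debug_lock))
		return NOTIFY_DONE;

	/* ... dump per-CPU debug state ... */

	mutex_unlock(&debug_lock);
	return NOTIFY_DONE;
}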


Re: [PATCH v2 31/39] docs: ABI: cleanup several ABI documents

2020-10-30 Thread Suzuki K Poulose

On 10/30/20 7:40 AM, Mauro Carvalho Chehab wrote:

There are some ABI documents that, while they don't generate
any warnings, have issues when parsed by the get_abi.pl script
in its output result.

Address them, in order to provide a clean output.

Acked-by: Jonathan Cameron  #for IIO
Reviewed-by: Tom Rix  # for fpga-manager
Reviewed-By: Kajol Jain # for sysfs-bus-event_source-devices-hv_gpci and sysfs-bus-event_source-devices-hv_24x7
Acked-by: Oded Gabbay  # for Habanalabs
Acked-by: Vaibhav Jain  # for sysfs-bus-papr-pmem
Signed-off-by: Mauro Carvalho Chehab 





  .../testing/sysfs-bus-coresight-devices-etb10 |   5 +-

For the above,

Acked-by: Suzuki K Poulose 


Re: [PATCH v2 01/20] perf/doc: update design.txt for exclude_{host|guest} flags

2018-11-26 Thread Suzuki K Poulose

Hi Andrew,

On 26/11/2018 11:12, Andrew Murray wrote:

Update design.txt to reflect the presence of the exclude_host
and exclude_guest perf flags.

Signed-off-by: Andrew Murray 


Thanks a lot for adding this !


---
  tools/perf/design.txt | 4 
  1 file changed, 4 insertions(+)

diff --git a/tools/perf/design.txt b/tools/perf/design.txt
index a28dca2..5b2b23b 100644
--- a/tools/perf/design.txt
+++ b/tools/perf/design.txt
@@ -222,6 +222,10 @@ The 'exclude_user', 'exclude_kernel' and 'exclude_hv' bits provide a
  way to request that counting of events be restricted to times when the
  CPU is in user, kernel and/or hypervisor mode.
  
+Furthermore the 'exclude_host' and 'exclude_guest' bits provide a way
+to request counting of events restricted to guest and host contexts when
+using KVM virtualisation.


minor nit: could we generalise this to :

"using Linux as the hypervisor".

Otherwise, looks good to me.

Cheers
Suzuki
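
As an aside, from user space these are plain bitfields in struct
perf_event_attr; a hedged illustration (not part of the patch, the helper
name is made up):

/* Count CPU cycles only while a KVM guest is running, by excluding
 * host context; setting exclude_guest = 1 instead does the opposite. */
#include <linux/perf_event.h>
#include <string.h>

static void setup_guest_only_cycles(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;
	attr->exclude_host = 1;		/* drop host-mode events */
}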


Re: [RFT PATCH -next v3] [BUGFIX] kprobes: Fix Failed to find blacklist error on ia64 and ppc64

2014-06-19 Thread Suzuki K. Poulose
On 06/19/2014 10:22 AM, Masami Hiramatsu wrote:
 (2014/06/19 10:30), Michael Ellerman wrote:
 On Wed, 2014-06-18 at 17:46 +0900, Masami Hiramatsu wrote:
 (2014/06/18 16:56), Michael Ellerman wrote:
 On Fri, 2014-06-06 at 15:38 +0900, Masami Hiramatsu wrote:
 Ping?

 I guess this should go to 3.16 branch, shouldn't it?

 diff --git a/arch/powerpc/include/asm/types.h b/arch/powerpc/include/asm/types.h
 index bfb6ded..8b89d65 100644
 --- a/arch/powerpc/include/asm/types.h
 +++ b/arch/powerpc/include/asm/types.h
 @@ -25,6 +25,17 @@ typedef struct {
  unsigned long env;
  } func_descr_t;
  
 +#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF == 1)
 +/*
 + * On PPC64 ABIv1 the function pointer actually points to the
 + * function's descriptor. The first entry in the descriptor is the
 + * address of the function text.
 + */
 +#define function_entry(fn)  (((func_descr_t *)(fn))->entry)
 +#else
 +#define function_entry(fn)  ((unsigned long)(fn))
 +#endif

 We already have ppc_function_entry(), can't you use that?

 I'd like to ask you whether the address which ppc_function_entry() returns 
 on
 PPC ABIv2 is really same address in kallsyms or not.
 As you can see, kprobes uses function_entry() to get the actual entry 
 address
 where kallsyms knows. I have not much information about that, but it seems 
 that
 the global entry point is the address which kallsyms knows, isn't it?

 OK. I'm not sure off the top of my head which address kallsyms knows about, 
 but
 yes it's likely that it is the global entry point.

 I recently sent a patch to add ppc_global_function_entry(), because we need 
 it
 in the ftrace code. Once that is merged you could use that.
 
 Yeah, I could use that. But since this is used in arch-independent code (e.g. 
 IA64
 needs similar macro), I think we'd better define function_entry() in 
 asm/types.h for
 general use (for kallsyms), and rename ppc_function_entry to 
 local_function_entry()
 in asm/code-patching.h.
 
 
 How do you hit the original problem, you don't actually specify in your 
 commit
 message? Something with kprobes obviously, but what exactly? I'll try and
 reproduce it here.
 
 Ah, those messages should be shown in dmesg when booting if it doesn't work,
 because the messages are printed by the initialization process of the kprobe
 blacklist.
 So, reproducing it is just enabling CONFIG_KPROBES and booting.
Well, we don't get those messages on Power, since kallsyms has the
entries for .function_name. The correct way to verify is either:

1) Dump the black_list via xmon ( see :
https://lkml.org/lkml/2014/5/29/893 ) and verify the entries.

or

2) Issue a kprobe on a black listed entry and hit a success,(which we
will, since we don't check the actual function address).

Thanks
Suzuki


 
 Thank you,
 


Re: [RFT PATCH -next v3] [BUGFIX] kprobes: Fix Failed to find blacklist error on ia64 and ppc64

2014-06-19 Thread Suzuki K. Poulose
On 06/19/2014 12:56 PM, Masami Hiramatsu wrote:
 (2014/06/19 15:40), Suzuki K. Poulose wrote:
 On 06/19/2014 10:22 AM, Masami Hiramatsu wrote:
 (2014/06/19 10:30), Michael Ellerman wrote:
 On Wed, 2014-06-18 at 17:46 +0900, Masami Hiramatsu wrote:
 (2014/06/18 16:56), Michael Ellerman wrote:
 On Fri, 2014-06-06 at 15:38 +0900, Masami Hiramatsu wrote:
 Ping?

 I guess this should go to 3.16 branch, shouldn't it?

 diff --git a/arch/powerpc/include/asm/types.h b/arch/powerpc/include/asm/types.h
 index bfb6ded..8b89d65 100644
 --- a/arch/powerpc/include/asm/types.h
 +++ b/arch/powerpc/include/asm/types.h
 @@ -25,6 +25,17 @@ typedef struct {
unsigned long env;
  } func_descr_t;
  
 +#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF == 1)
 +/*
 + * On PPC64 ABIv1 the function pointer actually points to the
 + * function's descriptor. The first entry in the descriptor is the
 + * address of the function text.
 + */
 +#define function_entry(fn)	(((func_descr_t *)(fn))->entry)
 +#else
 +#define function_entry(fn)((unsigned long)(fn))
 +#endif

 We already have ppc_function_entry(), can't you use that?

 I'd like to ask you whether the address which ppc_function_entry() 
 returns on
 PPC ABIv2 is really same address in kallsyms or not.
 As you can see, kprobes uses function_entry() to get the actual entry 
 address
 where kallsyms knows. I have not much information about that, but it 
 seems that
 the global entry point is the address which kallsyms knows, isn't it?

 OK. I'm not sure off the top of my head which address kallsyms knows 
 about, but
 yes it's likely that it is the global entry point.

 I recently sent a patch to add ppc_global_function_entry(), because we 
 need it
 in the ftrace code. Once that is merged you could use that.

 Yeah, I could use that. But since this is used in arch-independent code 
 (e.g. IA64
 needs similar macro), I think we'd better define function_entry() in 
 asm/types.h for
 general use (for kallsyms), and rename ppc_function_entry to 
 local_function_entry()
 in asm/code-patching.h.


 How do you hit the original problem, you don't actually specify in your 
 commit
 message? Something with kprobes obviously, but what exactly? I'll try and
 reproduce it here.

 Ah, those messages should be shown in dmesg when booting if it doesn't work,
 because the messages are printed by the initialization process of the kprobe
 blacklist.
 So, reproducing it is just enabling CONFIG_KPROBES and booting.
 Well, we don't get those messages on Power, since kallsyms has the
 entries for .function_name. The correct way to verify is either:
 
 Hmm, that seems another issue on powerpc. Is that expected(and designed)
 behavior?
AFAIK, yes, it is.
To be more precise :

we have 'foo' and '.foo' for a function foo(), where 'foo' points to the
function descriptor and '.foo' points to the actual function text.

So, a kallsyms_lookup_size_offset() on both 'foo' and '.foo' will return
a hit. So, if we make sure we use the value of '.foo' (by using the
appropriate macros) we should be fine.
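
For reference, the descriptor being discussed looks like this (matching
the func_descr_t quoted from arch/powerpc/include/asm/types.h above):

typedef struct {
	unsigned long entry;	/* address of the function text, i.e. '.foo' */
	unsigned long toc;	/* TOC base for the function */
	unsigned long env;	/* environment pointer (unused by C) */
} func_descr_t;

/* 'foo' resolves to such a descriptor; '.foo' resolves to .entry. */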

 And if so, how can I verify this when initializing the blacklist?
 (should I rather use kallsyms_lookup() and kallsyms_lookup_name() for
 verification?)
One way to verify would be to make sure the symbol starts with '.' from
the result of the current kallsyms_lookup_size_offset() for PPC.

Thanks
Suzuki

 
 Thank you,
 

 1) Dump the black_list via xmon ( see :
 https://lkml.org/lkml/2014/5/29/893 ) and verify the entries.

 or

 2) Issue a kprobe on a black listed entry and hit a success,(which we
 will, since we don't check the actual function address).

 Thanks
 Suzuki



 Thank you,



Re: [RFT PATCH -next v2] [BUGFIX] kprobes: Fix Failed to find blacklist error on ia64 and ppc64

2014-05-29 Thread Suzuki K. Poulose
On 05/27/2014 12:01 PM, Masami Hiramatsu wrote:
 On ia64 and ppc64, the function pointer does not point to the
 entry address of the function, but to the address of the function
 descriptor (which contains the entry address and misc
 data). Since kprobes passes the function pointer stored
 by NOKPROBE_SYMBOL() to kallsyms_lookup_size_offset() for
 initializing its blacklist, it fails and reports many errors
 as below.
 
   Failed to find blacklist 000101316830
   Failed to find blacklist 0001013000f0a000
   Failed to find blacklist 000101315f70a000
   Failed to find blacklist 000101324c80a000
   Failed to find blacklist 0001013063f0a000
   Failed to find blacklist 000101327800a000
   Failed to find blacklist 0001013277f0a000
   Failed to find blacklist 000101315a70a000
   Failed to find blacklist 0001013277e0a000
   Failed to find blacklist 000101305a20a000
   Failed to find blacklist 0001013277d0a000
   Failed to find blacklist 00010130bdc0a000
   Failed to find blacklist 00010130dc20a000
   Failed to find blacklist 000101309a00a000
   Failed to find blacklist 0001013277c0a000
   Failed to find blacklist 0001013277b0a000
   Failed to find blacklist 0001013277a0a000
   Failed to find blacklist 000101327790a000
   Failed to find blacklist 000101303140a000
   Failed to find blacklist 0001013a3280a000
 
 To fix this bug, this introduces a function_entry() macro to
 retrieve the entry address from the given function pointer,
 and uses it for kallsyms_lookup_size_offset() while initializing
 the blacklist.
 
 Changes in V2:
  - Use function_entry() macro when lookin up symbols instead
of storing it.
  - Update for the latest -next.
 
 Signed-off-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
 Reported-by: Tony Luck tony.l...@gmail.com
 Cc: Suzuki K. Poulose suz...@in.ibm.com
 Cc: Tony Luck tony.l...@intel.com
 Cc: Fenghua Yu fenghua...@intel.com
 Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
 Cc: Paul Mackerras pau...@samba.org
 Cc: Ananth N Mavinakayanahalli ana...@in.ibm.com
 Cc: Kevin Hao haoke...@gmail.com
 Cc: linux-i...@vger.kernel.org
 Cc: linux-ker...@vger.kernel.org
 Cc: linuxppc-dev@lists.ozlabs.org
 ---
  arch/ia64/include/asm/types.h|2 ++
  arch/powerpc/include/asm/types.h |   11 +++
  include/linux/types.h|4 
  kernel/kprobes.c |4 +++-
  4 files changed, 20 insertions(+), 1 deletion(-)
 
 diff --git a/arch/ia64/include/asm/types.h b/arch/ia64/include/asm/types.h
 index 4c351b1..95279dd 100644
 --- a/arch/ia64/include/asm/types.h
 +++ b/arch/ia64/include/asm/types.h
 @@ -27,5 +27,7 @@ struct fnptr {
   unsigned long gp;
  };
  
 +#define function_entry(fn) (((struct fnptr *)(fn))->ip)
 +
  #endif /* !__ASSEMBLY__ */
  #endif /* _ASM_IA64_TYPES_H */
 diff --git a/arch/powerpc/include/asm/types.h b/arch/powerpc/include/asm/types.h
 index bfb6ded..8b89d65 100644
 --- a/arch/powerpc/include/asm/types.h
 +++ b/arch/powerpc/include/asm/types.h
 @@ -25,6 +25,17 @@ typedef struct {
   unsigned long env;
  } func_descr_t;
  
 +#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF == 1)
 +/*
 + * On PPC64 ABIv1 the function pointer actually points to the
 + * function's descriptor. The first entry in the descriptor is the
 + * address of the function text.
 + */
 +#define function_entry(fn)   (((func_descr_t *)(fn))->entry)
 +#else
 +#define function_entry(fn)   ((unsigned long)(fn))
 +#endif
 +
  #endif /* __ASSEMBLY__ */
  
  #endif /* _ASM_POWERPC_TYPES_H */
 diff --git a/include/linux/types.h b/include/linux/types.h
 index a0bb704..3b95369 100644
 --- a/include/linux/types.h
 +++ b/include/linux/types.h
 @@ -213,5 +213,9 @@ struct callback_head {
  };
  #define rcu_head callback_head
  
 +#ifndef function_entry
 +#define function_entry(fn)   ((unsigned long)(fn))
 +#endif
 +
  #endif /*  __ASSEMBLY__ */
  #endif /* _LINUX_TYPES_H */
 diff --git a/kernel/kprobes.c b/kernel/kprobes.c
 index 2ac9f13..3859c88 100644
 --- a/kernel/kprobes.c
 +++ b/kernel/kprobes.c
 @@ -32,6 +32,7 @@
   *   prasa...@in.ibm.com added function-return probes.
   */
  #include <linux/kprobes.h>
 +#include <linux/types.h>
  #include <linux/hash.h>
  #include <linux/init.h>
  #include <linux/slab.h>
 @@ -2042,7 +2043,8 @@ static int __init populate_kprobe_blacklist(unsigned long *start,
  	unsigned long offset = 0, size = 0;
 
  	for (iter = start; iter < end; iter++) {
 -		if (!kallsyms_lookup_size_offset(*iter, &size, &offset)) {
 +		if (!kallsyms_lookup_size_offset(function_entry(*iter),
 +						 &size, &offset)) {

On powerpc we will be able to resolve the *iter to func_descr and won't
get the below error with/without this patch. So we have to actually
verify the kprobe_blacklist contents to make sure everything is alright.

   pr_err("Failed to find blacklist %p\n", (void *)*iter);
   continue;
   }
 

There is a bug here.
You need to set

Re: [RFT PATCH -next ] [BUGFIX] kprobes: Fix Failed to find blacklist error on ia64 and ppc64

2014-05-26 Thread Suzuki K. Poulose
On 05/07/2014 05:25 PM, Masami Hiramatsu wrote:
 On ia64 and ppc64, the function pointer does not point to the
 entry address of the function, but to the address of the function
 descriptor (which contains the entry address and misc
 data). Since kprobes passes the function pointer stored
 by NOKPROBE_SYMBOL() to kallsyms_lookup_size_offset() for
 initializing its blacklist, it fails and reports many errors
 as below.
 
   Failed to find blacklist 000101316830
   Failed to find blacklist 0001013000f0a000
   Failed to find blacklist 000101315f70a000
   Failed to find blacklist 000101324c80a000
   Failed to find blacklist 0001013063f0a000
   Failed to find blacklist 000101327800a000
   Failed to find blacklist 0001013277f0a000
   Failed to find blacklist 000101315a70a000
   Failed to find blacklist 0001013277e0a000
   Failed to find blacklist 000101305a20a000
   Failed to find blacklist 0001013277d0a000
   Failed to find blacklist 00010130bdc0a000
   Failed to find blacklist 00010130dc20a000
   Failed to find blacklist 000101309a00a000
   Failed to find blacklist 0001013277c0a000
   Failed to find blacklist 0001013277b0a000
   Failed to find blacklist 0001013277a0a000
   Failed to find blacklist 000101327790a000
   Failed to find blacklist 000101303140a000
   Failed to find blacklist 0001013a3280a000
 
 To fix this bug, this introduces a function_entry() macro to
 retrieve the entry address from the given function pointer,
 and uses it in NOKPROBE_SYMBOL().
 
 
 Signed-off-by: Masami Hiramatsu masami.hiramatsu...@hitachi.com
 Reported-by: Tony Luck tony.l...@gmail.com
 Cc: Tony Luck tony.l...@intel.com
 Cc: Fenghua Yu fenghua...@intel.com
 Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
 Cc: Paul Mackerras pau...@samba.org
 Cc: Ananth N Mavinakayanahalli ana...@in.ibm.com
 Cc: Kevin Hao haoke...@gmail.com
 Cc: linux-i...@vger.kernel.org
 Cc: linux-ker...@vger.kernel.org
 Cc: linuxppc-dev@lists.ozlabs.org
 ---
  arch/ia64/include/asm/types.h|2 ++
  arch/powerpc/include/asm/types.h |   11 +++
  include/linux/kprobes.h  |3 ++-
  include/linux/types.h|4 
  4 files changed, 19 insertions(+), 1 deletion(-)
 
 diff --git a/arch/ia64/include/asm/types.h b/arch/ia64/include/asm/types.h
 index 4c351b1..6ab7b6c 100644
 --- a/arch/ia64/include/asm/types.h
 +++ b/arch/ia64/include/asm/types.h
 @@ -27,5 +27,7 @@ struct fnptr {
   unsigned long gp;
  };
  
 +#define constant_function_entry(fn) (((struct fnptr *)(fn))->ip)
 +
  #endif /* !__ASSEMBLY__ */
  #endif /* _ASM_IA64_TYPES_H */
 diff --git a/arch/powerpc/include/asm/types.h b/arch/powerpc/include/asm/types.h
 index bfb6ded..fd297b8 100644
 --- a/arch/powerpc/include/asm/types.h
 +++ b/arch/powerpc/include/asm/types.h
 @@ -25,6 +25,17 @@ typedef struct {
   unsigned long env;
  } func_descr_t;
  
 +#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF == 1)
 +/*
 + * On PPC64 ABIv1 the function pointer actually points to the
 + * function's descriptor. The first entry in the descriptor is the
 + * address of the function text.
 + */
 +#define constant_function_entry(fn)  (((func_descr_t *)(fn))->entry)
 +#else
 +#define constant_function_entry(fn)  ((unsigned long)(fn))
 +#endif
 +
  #endif /* __ASSEMBLY__ */
  
  #endif /* _ASM_POWERPC_TYPES_H */
 diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
 index e059507..637eafe 100644
 --- a/include/linux/kprobes.h
 +++ b/include/linux/kprobes.h
 @@ -40,6 +40,7 @@
  #include <linux/rcupdate.h>
  #include <linux/mutex.h>
  #include <linux/ftrace.h>
 +#include <linux/types.h>
  
  #ifdef CONFIG_KPROBES
  #include <asm/kprobes.h>
 @@ -485,7 +486,7 @@ static inline int enable_jprobe(struct jprobe *jp)
  #define __NOKPROBE_SYMBOL(fname)			\
  static unsigned long __used				\
  	__attribute__((section("_kprobe_blacklist")))	\
 -	_kbl_addr_##fname = (unsigned long)fname;
 +	_kbl_addr_##fname = constant_function_entry(fname);
  #define NOKPROBE_SYMBOL(fname)   __NOKPROBE_SYMBOL(fname)


Throws up build errors for me :

  CC  kernel/notifier.o
kernel/notifier.c:105:1: error: initializer element is not constant
 NOKPROBE_SYMBOL(notifier_call_chain);
 ^
kernel/notifier.c:188:1: error: initializer element is not constant
 NOKPROBE_SYMBOL(__atomic_notifier_call_chain);
 ^
kernel/notifier.c:196:1: error: initializer element is not constant
 NOKPROBE_SYMBOL(atomic_notifier_call_chain);
 ^
kernel/notifier.c:546:1: error: initializer element is not constant
 NOKPROBE_SYMBOL(notify_die);
 ^
make[1]: *** [kernel/notifier.o] Error 1
make: *** [kernel] Error 2

Thanks
Suzuki


[PATCH] powerpc: Set the NOTE type for SPE regset

2013-08-27 Thread Suzuki K. Poulose

The regset definition for SPE doesn't have the core_note_type
set, which prevents it from being dumped. Add the note type
NT_PPC_SPE for SPE regset.

Signed-off-by: Suzuki K Poulose suz...@in.ibm.com
Cc: Roland McGrath rol...@hack.frob.com
---
 arch/powerpc/kernel/ptrace.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
index 9a0d24c..16edc5d 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
@@ -657,7 +657,7 @@ static const struct user_regset native_regsets[] = {
 #endif
 #ifdef CONFIG_SPE
[REGSET_SPE] = {
-   .n = 35,
+   .core_note_type = NT_PPC_SPE, .n = 35,
.size = sizeof(u32), .align = sizeof(u32),
.active = evr_active, .get = evr_get, .set = evr_set
},



Re: [PATCH] powerpc: don't flush/invalidate the d/icache for an unknown relocation type

2013-06-27 Thread Suzuki K. Poulose

On 06/27/2013 06:39 AM, Kevin Hao wrote:

For an unknown relocation type since the value of r4 is just the 8bit
relocation type, the sum of r4 and r7 may yield an invalid memory
address. For example:
 In normal case:
  r4 = c00x
  r7 = 4000
  r4 + r7 = 000x

 For an unknown relocation type:
  r4 = 00xx
  r7 = 4000
  r4 + r7 = 40xx
40xx is an invalid memory address for a board which has just
512M memory.

And operations such as dcbst or icbi may cause a bus error for an
invalid memory address on some platforms and then cause the board
to reset. So we should skip the flush/invalidate of the d/icache for
an unknown relocation type.



Good catch. Thanks for the fix.

Acked-by: Suzuki K. Poulose suz...@in.ibm.com



Re: [PATCH] kexec/ppc: Fix kernel program entry point while changing the load addr

2013-03-03 Thread Suzuki K. Poulose

On 03/04/2013 07:11 AM, Simon Horman wrote:

[ Cc: linuxppc-dev@lists.ozlabs.org ]

On Sun, Mar 03, 2013 at 01:06:00PM +0530, Suzuki K. Poulose wrote:

From: Suzuki K. Poulose suz...@in.ibm.com

uImage probe fills the entry point (ep) based on the load_addr
from the uImage headers. If we change the load_addr, we should
accordingly update the entry point.

For ELF, calculate the offset of e_entry from the virtual start
address and add it to the physical start address to find the
physical address of kernel entry.

i.e.,
   pa(e_entry) = pa(phdr[0].p_vaddr) + (e_entry - phdr[0].p_vaddr)
               = kernel_addr + (e_entry - phdr[0].p_vaddr)
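
(Worked example with hypothetical numbers: if kernel_addr = 0x2000000,
phdr[0].p_vaddr = 0xc0000000 and e_entry = 0xc0000100, the physical entry
point is 0x2000000 + (0xc0000100 - 0xc0000000) = 0x2000100.)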


Would it be possible for someone to provide a review of this change?

To make it a bit more clear:

The entry point of the kernel is usually at 0 offset from the first
PT_LOAD section. The current code makes this assumption and uses the 
pa(phdr[0].p_vaddr) as the kernel entry.


But this *may* not always be true; in such a case the kexec would fail.
While I fixed the uImage case, I thought it would be better to handle
the same case in ELF.


Btw, this calculation is not specific to ppc32.

Thanks
Suzuki





Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Sebastian Andrzej Siewior bige...@linutronix.de
Cc: Matthew McClintock m...@freescale.com
---
  kexec/arch/ppc/kexec-elf-ppc.c|   12 
  kexec/arch/ppc/kexec-uImage-ppc.c |6 +-
  2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/kexec/arch/ppc/kexec-elf-ppc.c b/kexec/arch/ppc/kexec-elf-ppc.c
index 8e408cc..5f63a64 100644
--- a/kexec/arch/ppc/kexec-elf-ppc.c
+++ b/kexec/arch/ppc/kexec-elf-ppc.c
@@ -397,10 +397,14 @@ int elf_ppc_load(int argc, char **argv, const char *buf, off_t len,
		die("Error device tree not loadded to address it was expecting to be loaded too!\n");
}

-   /* set various variables for the purgatory  ehdr.e_entry is a
-* virtual address, we can use kernel_addr which
-* should be the physical start address of the kernel */
-   addr = kernel_addr;
+   /*
+* set various variables for the purgatory.
+* ehdr.e_entry is a virtual address. we know physical start
+* address of the kernel (kernel_addr). Find the offset of
+* e_entry from the virtual start address(e_phdr[0].p_vaddr)
+* and calculate the actual physical address of the 'kernel entry'.
+*/
+   addr = kernel_addr + (ehdr.e_entry - ehdr.e_phdr[0].p_vaddr);
	elf_rel_set_symbol(&info->rhdr, "kernel", &addr, sizeof(addr));

addr = dtb_addr;
diff --git a/kexec/arch/ppc/kexec-uImage-ppc.c b/kexec/arch/ppc/kexec-uImage-ppc.c
index e0bc7bb..900cd16 100644
--- a/kexec/arch/ppc/kexec-uImage-ppc.c
+++ b/kexec/arch/ppc/kexec-uImage-ppc.c
@@ -159,15 +159,19 @@ static int ppc_load_bare_bits(int argc, char **argv, const char *buf,

/*
 * If the provided load_addr cannot be allocated, find a new
-* area.
+* area. Rebase the entry point based on the new load_addr.
 */
if (!valid_memory_range(info, load_addr, load_addr + (len + _1MiB))) {
+   int ep_offset = ep - load_addr;
+
load_addr = locate_hole(info, len + _1MiB, 0, 0, max_addr, 1);
if (load_addr == ULONG_MAX) {
			printf("Can't allocate memory for kernel of len %ld\n",
			       len + _1MiB);
return -1;
}
+
+   ep = load_addr + ep_offset;
}

add_segment(info, buf, len, load_addr, len + _1MiB);




[PATCH] uprobes/powerpc: Add dependency on single step emulation

2013-01-07 Thread Suzuki K. Poulose
From: Suzuki K. Poulose suz...@in.ibm.com

Uprobes uses emulate_step in sstep.c, but we haven't explicitly specified
the dependency. On pseries HAVE_HW_BREAKPOINT protects us, but 44x has no
such luxury.

Consolidate other users that depend on sstep and create a new config option.

Signed-off-by: Ananth N Mavinakayanahalli ana...@in.ibm.com
Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: linuxppc-...@ozlabs.org
Cc: sta...@vger.kernel.org
---
 arch/powerpc/Kconfig  |4 
 arch/powerpc/lib/Makefile |4 +---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 17903f1..dabe429 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -275,6 +275,10 @@ config PPC_ADV_DEBUG_DAC_RANGE
depends on PPC_ADV_DEBUG_REGS  44x
default y
 
+config PPC_EMULATE_SSTEP
+   bool
+   default y if KPROBES || UPROBES || XMON || HAVE_HW_BREAKPOINT
+
 source init/Kconfig
 
 source kernel/Kconfig.freezer
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 746e0c8..35baad9 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -19,9 +19,7 @@ obj-$(CONFIG_PPC64)   += copypage_64.o copyuser_64.o \
   checksum_wrappers_64.o hweight_64.o \
   copyuser_power7.o string_64.o copypage_power7.o \
   memcpy_power7.o
-obj-$(CONFIG_XMON) += sstep.o ldstfp.o
-obj-$(CONFIG_KPROBES)  += sstep.o ldstfp.o
-obj-$(CONFIG_HAVE_HW_BREAKPOINT)   += sstep.o ldstfp.o
+obj-$(CONFIG_PPC_EMULATE_SSTEP)+= sstep.o ldstfp.o
 
 ifeq ($(CONFIG_PPC64),y)
 obj-$(CONFIG_SMP)  += locks.o



Re: [PATCH v2 1/4] kprobes/powerpc: Do not disable External interrupts during single step

2012-12-10 Thread Suzuki K. Poulose

On 12/03/2012 08:37 PM, Suzuki K. Poulose wrote:

From: Suzuki K. Poulose suz...@in.ibm.com

External/Decrement exceptions have lower priority than the Debug Exception.
So, we don't have to disable the External interrupts before a single step.
However, on BookE, Critical Input Exception(CE) has higher priority than a
Debug Exception. Hence we mask them.

Signed-off-by:  Suzuki K. Poulose suz...@in.ibm.com
Cc: Sebastian Andrzej Siewior bige...@linutronix.de
Cc: Ananth N Mavinakaynahalli ana...@in.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-...@ozlabs.org
---
  arch/powerpc/kernel/kprobes.c |   10 +-
  1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e88c643..4901b34 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -104,13 +104,13 @@ void __kprobes arch_remove_kprobe(struct kprobe *p)

  static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
  {
-   /* We turn off async exceptions to ensure that the single step will
-    * be for the instruction we have the kprobe on, if we dont its
-    * possible we'd get the single step reported for an exception handler
-    * like Decrementer or External Interrupt */
-   regs->msr &= ~MSR_EE;
    regs->msr |= MSR_SINGLESTEP;
  #ifdef CONFIG_PPC_ADV_DEBUG_REGS
+   /*
+* We turn off Critical Input Exception(CE) to ensure that the single
+* step will be for the instruction we have the probe on; if we don't,
+* it is possible we'd get the single step reported for CE.
+*/
    regs->msr &= ~MSR_CE;
mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) | DBCR0_IC | DBCR0_IDM);
  #ifdef CONFIG_PPC_47x



Ben, Kumar,

Could you please review this patch ?


Thanks
Suzuki



[PATCH v2 1/4] kprobes/powerpc: Do not disable External interrupts during single step

2012-12-03 Thread Suzuki K. Poulose
From: Suzuki K. Poulose suz...@in.ibm.com

External/Decrement exceptions have lower priority than the Debug Exception.
So, we don't have to disable the External interrupts before a single step.
However, on BookE, Critical Input Exception(CE) has higher priority than a
Debug Exception. Hence we mask them.

Signed-off-by:  Suzuki K. Poulose suz...@in.ibm.com
Cc: Sebastian Andrzej Siewior bige...@linutronix.de
Cc: Ananth N Mavinakaynahalli ana...@in.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-...@ozlabs.org
---
 arch/powerpc/kernel/kprobes.c |   10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e88c643..4901b34 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -104,13 +104,13 @@ void __kprobes arch_remove_kprobe(struct kprobe *p)
 
 static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
 {
-   /* We turn off async exceptions to ensure that the single step will
-    * be for the instruction we have the kprobe on, if we dont its
-    * possible we'd get the single step reported for an exception handler
-    * like Decrementer or External Interrupt */
-   regs->msr &= ~MSR_EE;
    regs->msr |= MSR_SINGLESTEP;
 #ifdef CONFIG_PPC_ADV_DEBUG_REGS
+   /* 
+* We turn off Critical Input Exception(CE) to ensure that the single
+* step will be for the instruction we have the probe on; if we don't,
+* it is possible we'd get the single step reported for CE.
+*/
    regs->msr &= ~MSR_CE;
mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) | DBCR0_IC | DBCR0_IDM);
 #ifdef CONFIG_PPC_47x



[PATCH v2 2/4] powerpc: Move the single step enable code to a generic path

2012-12-03 Thread Suzuki K. Poulose
From: Suzuki K. Poulose suz...@in.ibm.com

This patch moves the single step enable code used by kprobe to a generic
routine header so that, it can be re-used by other code, in this case,
uprobes. No functional changes.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Ananth N Mavinakaynahalli ana...@in.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-...@ozlabs.org
---
 arch/powerpc/include/asm/probes.h |   25 +
 arch/powerpc/kernel/kprobes.c |   21 +
 2 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/probes.h 
b/arch/powerpc/include/asm/probes.h
index 5f1e15b..f94a44f 100644
--- a/arch/powerpc/include/asm/probes.h
+++ b/arch/powerpc/include/asm/probes.h
@@ -38,5 +38,30 @@ typedef u32 ppc_opcode_t;
 #define is_trap(instr) (IS_TW(instr) || IS_TWI(instr))
 #endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_ADV_DEBUG_REGS
+#define MSR_SINGLESTEP (MSR_DE)
+#else
+#define MSR_SINGLESTEP (MSR_SE)
+#endif
+
+/* Enable single stepping for the current task */
+static inline void enable_single_step(struct pt_regs *regs)
+{
+   regs->msr |= MSR_SINGLESTEP;
+#ifdef CONFIG_PPC_ADV_DEBUG_REGS
+   /* 
+* We turn off Critical Input Exception(CE) to ensure that the single
+* step will be for the instruction we have the probe on; if we don't,
+* it is possible we'd get the single step reported for CE.
+*/
+   regs->msr &= ~MSR_CE;
+   mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) | DBCR0_IC | DBCR0_IDM);
+#ifdef CONFIG_PPC_47x
+   isync();
+#endif
+#endif
+}
+
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_PROBES_H */
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 4901b34..92f1be7 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -36,12 +36,6 @@
 #include asm/sstep.h
 #include asm/uaccess.h
 
-#ifdef CONFIG_PPC_ADV_DEBUG_REGS
-#define MSR_SINGLESTEP (MSR_DE)
-#else
-#define MSR_SINGLESTEP (MSR_SE)
-#endif
-
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
@@ -104,20 +98,7 @@ void __kprobes arch_remove_kprobe(struct kprobe *p)
 
 static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
 {
-   regs->msr |= MSR_SINGLESTEP;
-#ifdef CONFIG_PPC_ADV_DEBUG_REGS
-   /* 
-* We turn off Critical Input Exception(CE) to ensure that the single
-* step will be for the instruction we have the probe on; if we don't,
-* it is possible we'd get the single step reported for CE.
-*/
-   regs->msr &= ~MSR_CE;
-   mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) | DBCR0_IC | DBCR0_IDM);
-#ifdef CONFIG_PPC_47x
-   isync();
-#endif
-#endif
-
+   enable_single_step(regs);
/*
 * On powerpc we should single step on the original
 * instruction even if the probed insn is a trap



Re: [PATCH 1/4] perf/powerpc: Use uapi/unistd.h to fix build error

2012-11-20 Thread Suzuki K. Poulose

On 11/08/2012 12:48 AM, Sukadev Bhattiprolu wrote:


 From b8beef080260c1625c8f801105504a82005295e5 Mon Sep 17 00:00:00 2001
From: Sukadev Bhattiprolu suka...@linux.vnet.ibm.com
Date: Wed, 31 Oct 2012 11:21:28 -0700
Subject: [PATCH 1/4] perf/powerpc: Use uapi/unistd.h to fix build error

Use the 'unistd.h' from arch/powerpc/include/uapi to build the perf tool.

Signed-off-by: Sukadev Bhattiprolu suka...@linux.vnet.ibm.com

Without this patch, I couldn't build perf on powerpc, with 3.7.0-rc2

Tested-by: Suzuki K. Poulose suz...@in.ibm.com

Thanks
Suzuki

---
  tools/perf/perf.h |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 054182e..f4952da 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -26,7 +26,7 @@ void get_term_dimensions(struct winsize *ws);
  #endif

  #ifdef __powerpc__
-#include "../../arch/powerpc/include/asm/unistd.h"
+#include "../../arch/powerpc/include/uapi/asm/unistd.h"
  #define rmb() asm volatile ("sync" ::: "memory")
  #define cpu_relax()   asm volatile ("" ::: "memory");
  #define CPUINFO_PROC  "cpu"





Re: [PATCH v2 1/2] [powerpc] Change memory_limit from phys_addr_t to unsigned long long

2012-09-07 Thread Suzuki K. Poulose

On 09/07/2012 07:05 AM, Benjamin Herrenschmidt wrote:

On Tue, 2012-08-21 at 17:12 +0530, Suzuki K. Poulose wrote:

There are some device-tree nodes whose values are of type phys_addr_t.
The phys_addr_t is variably sized based on CONFIG_PHYS_ADDR_T_64BIT.

Change these to a fixed unsigned long long for consistency.

This patch does the change only for memory_limit.

The following is a list of such variables which need the change:

  1) kernel_end, crashk_size - in arch/powerpc/kernel/machine_kexec.c

  2) (struct resource *)crashk_res.start - We could export a local static
 variable from machine_kexec.c.

Changing the above values might break the kexec-tools. So, I will
fix kexec-tools first to handle the different sized values and then change
  the above.

Suggested-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---


Breaks the build on some configs (with 32-bit phys_addr_t):


Sorry for that.


/home/benh/linux-powerpc-test/arch/powerpc/kernel/prom.c: In function
'early_init_devtree':
/home/benh/linux-powerpc-test/arch/powerpc/kernel/prom.c:664:25: error:
comparison of distinct pointer types lacks a cast

I'm fixing that myself this time but please be more careful.

Sure. Thanks Ben for fixing that.

Suzuki
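
For reference, the class of breakage, as a hedged illustration (not the
exact prom.c line that broke): the kernel's min()/max() macros type-check
their operands, so mixing a phys_addr_t with the now-unsigned-long-long
memory_limit needs min_t() or an explicit cast:

#include <linux/kernel.h>
#include <linux/memblock.h>

static unsigned long long clamp_memory_limit(unsigned long long memory_limit)
{
	phys_addr_t ram_top = memblock_end_of_DRAM();	/* u32 or u64 */

	/* min(memory_limit, ram_top) warns when the operand types differ */
	return min_t(unsigned long long, memory_limit, ram_top);
}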



[PATCH v2 0/2][powerpc] Export memory_limit via device tree

2012-08-21 Thread Suzuki K. Poulose
The following series exports the linux memory_limit set by
the mem= parameter via device-tree, so that kexec-tools
can limit the crash regions to the actual memory used by
the kernel.

Change since V1:

 * Added a patch to change the type of memory_limit to a
   fixed size(unsigned long long) from 'phys_addr_t' (which
   is 32bit on some ppc32 and 64 bit on ppc64 and some ppc32)

 * Rebased the patch to use recently fixed prom_update_property()
   which would add the property if it didn't exist.

---

Suzuki K. Poulose (2):
  [powerpc] Change memory_limit from phys_addr_t to unsigned long long
  [powerpc] Export memory limit via device tree


 arch/powerpc/include/asm/setup.h|2 +-
 arch/powerpc/kernel/fadump.c|3 +--
 arch/powerpc/kernel/machine_kexec.c |   14 +-
 arch/powerpc/kernel/prom.c  |2 +-
 arch/powerpc/mm/mem.c   |2 +-
 5 files changed, 17 insertions(+), 6 deletions(-)

-- 
Suzuki



[PATCH v2 1/2] [powerpc] Change memory_limit from phys_addr_t to unsigned long long

2012-08-21 Thread Suzuki K. Poulose
There are some device-tree nodes whose values are of type phys_addr_t.
The phys_addr_t is variably sized based on CONFIG_PHYS_ADDR_T_64BIT.

Change these to a fixed unsigned long long for consistency.

This patch does the change only for memory_limit.

The following is a list of such variables which need the change:

 1) kernel_end, crashk_size - in arch/powerpc/kernel/machine_kexec.c

 2) (struct resource *)crashk_res.start - We could export a local static
variable from machine_kexec.c.

Changing the above values might break the kexec-tools. So, I will
fix kexec-tools first to handle the different sized values and then change
 the above.

Suggested-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/include/asm/setup.h|2 +-
 arch/powerpc/kernel/fadump.c|3 +--
 arch/powerpc/kernel/machine_kexec.c |2 +-
 arch/powerpc/kernel/prom.c  |2 +-
 arch/powerpc/mm/mem.c   |2 +-
 5 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index d084ce1..8b9a306 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -9,7 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 extern unsigned int rtas_data;
 extern int mem_init_done;  /* set on boot once kmalloc can be called */
 extern int init_bootmem_done;  /* set once bootmem is available */
-extern phys_addr_t memory_limit;
+extern unsigned long long memory_limit;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 18bdf74..06c8202 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -289,8 +289,7 @@ int __init fadump_reserve_mem(void)
 	else
 		memory_limit = memblock_end_of_DRAM();
 	printk(KERN_INFO "Adjusted memory_limit for firmware-assisted"
-		" dump, now %#016llx\n",
-		(unsigned long long)memory_limit);
+		" dump, now %#016llx\n", memory_limit);
 	}
 	if (memory_limit)
 		memory_boundary = memory_limit;
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index 5df..4074eff 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -165,7 +165,7 @@ void __init reserve_crashkernel(void)
	if (memory_limit && memory_limit <= crashk_res.end) {
		memory_limit = crashk_res.end + 1;
		printk("Adjusted memory limit for crashkernel, now 0x%llx\n",
-		       (unsigned long long)memory_limit);
+		       memory_limit);
	}
 
	printk(KERN_INFO "Reserving %ldMB of memory at %ldMB "
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index f191bf0..c82c77d 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -78,7 +78,7 @@ static int __init early_parse_mem(char *p)
		return 1;

	memory_limit = PAGE_ALIGN(memparse(p, &p));
-	DBG("memory limit = 0x%llx\n", (unsigned long long)memory_limit);
+	DBG("memory limit = 0x%llx\n", memory_limit);
 
return 0;
 }
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index baaafde..0a8f353 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -62,7 +62,7 @@
 
 int init_bootmem_done;
 int mem_init_done;
-phys_addr_t memory_limit;
+unsigned long long memory_limit;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;



[PATCH v2 2/2] [powerpc] Export memory limit via device tree

2012-08-21 Thread Suzuki K. Poulose
The powerpc kernel doesn't export the memory limit enforced by 'mem='
kernel parameter. This is required for building the ELF header in
kexec-tools to limit the vmcore to capture only the used memory. On
powerpc the kexec-tools depends on the device-tree for memory related
information, unlike /proc/iomem on the x86.

Without this information, the kexec-tools assumes the entire System
RAM and vmcore creates an unnecessarily larger dump.

This patch exports the memory limit, if present, via
chosen/linux,memory-limit
property, so that the vmcore can be limited to the memory limit.

The prom_init seems to export this value in the same node, but it doesn't
really appear there. Also, the memory_limit gets adjusted while processing
the crashkernel= parameter. This patch makes sure we export the actual limit.

The kexec-tools will use the value to limit the 'end' of the memory
regions.

Tested this patch on ppc64 and ppc32(ppc440) with a kexec-tools
patch by Mahesh.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Tested-by: Mahesh J. Salgaonkar mah...@linux.vnet.ibm.com
---

 arch/powerpc/kernel/machine_kexec.c |   12 
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index 4074eff..fa9f6c7 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -204,6 +204,12 @@ static struct property crashk_size_prop = {
	.value = &crashk_size,
 };
 
+static struct property memory_limit_prop = {
+	.name = "linux,memory-limit",
+	.length = sizeof(unsigned long long),
+	.value = &memory_limit,
+};
+
 static void __init export_crashk_values(struct device_node *node)
 {
struct property *prop;
@@ -223,6 +229,12 @@ static void __init export_crashk_values(struct device_node *node)
		crashk_size = resource_size(&crashk_res);
		prom_add_property(node, &crashk_size_prop);
	}
+
+	/*
+	 * memory_limit is required by the kexec-tools to limit the
+	 * crash regions to the actual memory used.
+	 */
+	prom_update_property(node, &memory_limit_prop);
 }
 
 static int __init kexec_setup(void)
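
On the consumer side, kexec-tools can then read the property back; a
hedged user-space sketch (the function name and error handling are
illustrative only -- the kernel stores the raw native value, which is
big-endian on the ppc targets this was written for):

#include <stdio.h>
#include <stdint.h>
#include <endian.h>

static unsigned long long read_memory_limit(void)
{
	FILE *f = fopen("/proc/device-tree/chosen/linux,memory-limit", "rb");
	uint64_t v = 0;

	if (!f)
		return 0;			/* no limit exported */
	if (fread(&v, sizeof(v), 1, f) != 1)
		v = 0;
	fclose(f);
	return be64toh(v);
}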



Re: [PATCH] [powerpc] Export memory limit via device tree

2012-07-19 Thread Suzuki K. Poulose

On 07/11/2012 11:06 AM, Benjamin Herrenschmidt wrote:

diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index c957b12..0c9695d 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -207,6 +207,12 @@ static struct property crashk_size_prop = {
	.value = &crashk_size,
  };

+static struct property memory_limit_prop = {
+	.name = "linux,memory-limit",
+	.length = sizeof(phys_addr_t),
+	.value = &memory_limit,
+};
+


AFAIK phys_addr_t can change size, so instead make it point to a known
fixed-size quantity (a u64).

Ben,

Sorry for the delay in the response.

Some of the other properties are also of phys_addr_t, (e.g 
linux,crashkernel-base, linux,kernel-end ). Should we fix them as well ?


Or

Should we leave this also a phys_addr_t and let the userspace handle it ?




+
+   /* memory-limit is needed for constructing the crash regions */
+   prop = of_find_property(node, memory_limit_prop.name, NULL);
+   if (prop)
+   prom_remove_property(node, prop);
+
+   if (memory_limit)
+   prom_add_property(node, &memory_limit_prop);
+


There's a patch floating around making prom_update_property properly
handle both pre-existing and non-pre-existing props, you should probably
base yourself on top of it. I'm about to stick that patch in powerpc
-next


OK. I am testing the new patch based on the above commit. I will wait
for the clarification on the issue of the type, before I post it here.

Thanks
Suzuki



Re: 3.4.0-rc1: No init found

2012-07-05 Thread Suzuki K. Poulose

On 07/06/2012 04:06 AM, Tabi Timur-B04825 wrote:

On Wed, Apr 4, 2012 at 7:36 AM, Suzuki K. Poulose suz...@in.ibm.com wrote:


Not sure if this is related, but at the end of each kernel compilation,
the following messages are printed:


SYSMAP  System.map
SYSMAP  .tmp_System.map
WRAParch/powerpc/boot/zImage.pmac
INFO: Uncompressed kernel (size 0x6e52f8) overlaps the address of the
wrapper(0x40)
INFO: Fixing the link_address of wrapper to (0x70)
WRAParch/powerpc/boot/zImage.coff
INFO: Uncompressed kernel (size 0x6e52f8) overlaps the address of the
wrapper(0x50)
INFO: Fixing the link_address of wrapper to (0x70)
WRAParch/powerpc/boot/zImage.miboot
INFO: Uncompressed kernel (size 0x6d4b80) overlaps the address of the
wrapper(0x40)
INFO: Fixing the link_address of wrapper to (0x70)
Building modules, stage 2.
MODPOST 24 modules


I started to see these messages in January (around Linux 3.2.0), but never
investigated what it was since the produced kernels continued to boot just
fine.



The above change was added by me. The message is printed when the 'wrapper'
script finds that the decompressed kernel overlaps the 'bootstrap code' which
does the decompression. So it shifts the 'address' of the bootstrap code to
the next higher MB. As such it is harmless.


I see this message every time I build the kernel.  I know it's
harmless, but is this something that can be fixed?  That is, can we
change some linker script (or whatever) to make 0x700000 the default
value?


You could do this by setting the link_address based on your platform,
as some of the other platforms already do (e.g., pseries, ps3).

Or

we could add a parameter to the wrapper script to set the link_address ?

Ben, Josh,
What do you think ?

 Or maybe modify the wrapper script to just automatically find

the right spot without printing a message?


We need the message there for people who have the restriction of a fixed
link address. It would help them handle the situation accordingly
rather than failing blindly at boot.

Thanks
Suzuki



[PATCH] [powerpc] Export memory limit via device tree

2012-07-02 Thread Suzuki K. Poulose
The powerpc kernel doesn't export the memory limit enforced by 'mem='
kernel parameter. This is required for building the ELF header in
kexec-tools to limit the vmcore to capture only the used memory. On
powerpc the kexec-tools depends on the device-tree for memory related
information, unlike /proc/iomem on the x86.

Without this information, the kexec-tools assumes the entire System
RAM, and the vmcore becomes an unnecessarily large dump.

This patch exports the memory limit, if present, via chosen/linux,memory-limit
property, so that the vmcore can be limited to the memory limit.

prom_init appears to export this value in the same node, but it doesn't
actually end up there. Also, memory_limit gets adjusted while the
crashkernel= parameter is processed. This patch makes sure we export the
actual limit.

The kexec-tools will use the value to limit the 'end' of the memory
regions.
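
To illustrate the consumer side, here is a minimal userspace sketch of
how kexec-tools could pick the limit up. This is a hypothetical helper,
not the actual kexec-tools change; it assumes a 64-bit phys_addr_t (the
size question raised elsewhere in this thread), and relies on kernel and
userspace sharing the same (big) endianness on ppc:

#include <stdio.h>

static unsigned long long read_memory_limit(void)
{
	unsigned long long limit = 0;
	FILE *f = fopen("/proc/device-tree/chosen/linux,memory-limit", "rb");

	if (!f)
		return 0;	/* property absent: no limit enforced */
	if (fread(&limit, sizeof(limit), 1, f) != 1)
		limit = 0;
	fclose(f);
	return limit;	/* clamp each memory range's 'end' to this */
}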

Tested this patch on ppc64 and ppc32(ppc440) with a kexec-tools
patch by Mahesh.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Tested-by: Mahesh J. Salgaonkar mah...@linux.vnet.ibm.com
---

 arch/powerpc/kernel/machine_kexec.c |   15 +++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/kernel/machine_kexec.c 
b/arch/powerpc/kernel/machine_kexec.c
index c957b12..0c9695d 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -207,6 +207,12 @@ static struct property crashk_size_prop = {
.value = &crashk_size,
 };
 
+static struct property memory_limit_prop = {
+   .name = "linux,memory-limit",
+   .length = sizeof(phys_addr_t),
+   .value = &memory_limit,
+};
+
 static void __init export_crashk_values(struct device_node *node)
 {
struct property *prop;
@@ -226,6 +232,15 @@ static void __init export_crashk_values(struct device_node 
*node)
crashk_size = resource_size(crashk_res);
prom_add_property(node, &crashk_size_prop);
}
+
+   /* memory-limit is needed for constructing the crash regions */
+   prop = of_find_property(node, memory_limit_prop.name, NULL);
+   if (prop)
+   prom_remove_property(node, prop);
+
+   if (memory_limit)
+   prom_add_property(node, &memory_limit_prop);
+
 }
 
 static int __init kexec_setup(void)



Re: [PATCH] [ppc] Do not reserve cpu spin-table for crash kernel

2012-06-18 Thread Suzuki K. Poulose

On 05/24/2012 11:39 AM, Suzuki K. Poulose wrote:

As of now, the kexec reserves the spin-table for all the CPUs
on an SMP machine. The spin-table is pointed to by the
cpu-release-addr property in the device-tree. Reserving the
spin-table in the crash kernel will cause a BUG(), if the table
lies outside the memory reserved for the crashkernel.

Disable reserving the spin-table regions and use maxcpus=1 to
use only the crashing CPU to boot the crash kernel.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com


Simon,

Any response on this one ?

I have tested this on a Currituck board (476, SMP) with a UP kernel.
Without this patch, the secondary kernel hits 'PANIC' in boot while
trying to reserve a memory(the spin table), outside the memory
range(crash reserve).


Thanks
Suzuki


---

  kexec/arch/ppc/crashdump-powerpc.c |   19 +--
  kexec/arch/ppc/fixup_dtb.c |4 
  2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/kexec/arch/ppc/crashdump-powerpc.c 
b/kexec/arch/ppc/crashdump-powerpc.c
index 1bef69b..4c8c75d 100644
--- a/kexec/arch/ppc/crashdump-powerpc.c
+++ b/kexec/arch/ppc/crashdump-powerpc.c
@@ -262,10 +262,19 @@ static void ulltoa(unsigned long long i, char *str)
}
  }

+/* Append str to cmdline */
+static void add_cmdline(char *cmdline, char *str)
+{
+   int cmdlen = strlen(cmdline) + strlen(str);
+   if (cmdlen > (COMMAND_LINE_SIZE - 1))
+   die("Command line overflow\n");
+   strcat(cmdline, str);
+}
+
  static int add_cmdline_param(char *cmdline, unsigned long long addr,
char *cmdstr, char *byte)
  {
-   int cmdlen, len, align = 1024;
+   int align = 1024;
char str[COMMAND_LINE_SIZE], *ptr;

/* Passing in =xxxK / =xxxM format. Saves space required in cmdline.*/
@@ -284,11 +293,8 @@ static int add_cmdline_param(char *cmdline, unsigned long 
long addr,
ptr += strlen(str);
ulltoa(addr, ptr);
strcat(str, byte);
-   len = strlen(str);
-   cmdlen = strlen(cmdline) + len;
-   if (cmdlen > (COMMAND_LINE_SIZE - 1))
-   die("Command line overflow\n");
-   strcat(cmdline, str);
+
+   add_cmdline(cmdline, str);

dbgprintf("Command line after adding elfcorehdr: %s\n", cmdline);

@@ -365,6 +371,7 @@ int load_crashdump_segments(struct kexec_info *info, char 
*mod_cmdline,
 */
add_cmdline_param(mod_cmdline, elfcorehdr, " elfcorehdr=", "K");
add_cmdline_param(mod_cmdline, saved_max_mem, " savemaxmem=", "M");
+   add_cmdline(mod_cmdline, " maxcpus=1");
return 0;
  }

diff --git a/kexec/arch/ppc/fixup_dtb.c b/kexec/arch/ppc/fixup_dtb.c
index e9890a4..f832026 100644
--- a/kexec/arch/ppc/fixup_dtb.c
+++ b/kexec/arch/ppc/fixup_dtb.c
@@ -172,6 +172,9 @@ static void fixup_reserve_regions(struct kexec_info *info, 
char *blob_buf)
}
}

+#if 0
+   /* XXX: Do not reserve spin-table for CPUs. */
+
/* Add reserve regions for cpu-release-addr */
nodeoffset = fdt_node_offset_by_prop_value(blob_buf, -1, "device_type",
"cpu", 4);
while (nodeoffset != -FDT_ERR_NOTFOUND) {
@@ -201,6 +204,7 @@ static void fixup_reserve_regions(struct kexec_info *info, 
char *blob_buf)
nodeoffset = fdt_node_offset_by_prop_value(blob_buf, nodeoffset,
"device_type", "cpu", 4);
}
+#endif

  out:
print_fdt_reserve_regions(blob_buf);







[PATCH] [ppc] Do not reserve cpu spin-table for crash kernel

2012-05-24 Thread Suzuki K. Poulose
As of now, the kexec reserves the spin-table for all the CPUs
on an SMP machine. The spin-table is pointed to by the 
cpu-release-addr property in the device-tree. Reserving the
spin-table in the crash kernel will cause a BUG(), if the table
lies outside the memory reserved for the crashkernel.

Disable reserving the spin-table regions and use maxcpus=1 to 
use only the crashing CPU to boot the crash kernel.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 kexec/arch/ppc/crashdump-powerpc.c |   19 +--
 kexec/arch/ppc/fixup_dtb.c |4 
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/kexec/arch/ppc/crashdump-powerpc.c 
b/kexec/arch/ppc/crashdump-powerpc.c
index 1bef69b..4c8c75d 100644
--- a/kexec/arch/ppc/crashdump-powerpc.c
+++ b/kexec/arch/ppc/crashdump-powerpc.c
@@ -262,10 +262,19 @@ static void ulltoa(unsigned long long i, char *str)
}
 }
 
+/* Append str to cmdline */
+static void add_cmdline(char *cmdline, char *str)
+{
+   int cmdlen = strlen(cmdline) + strlen(str);
+   if (cmdlen > (COMMAND_LINE_SIZE - 1))
+   die("Command line overflow\n");
+   strcat(cmdline, str);
+}
+
 static int add_cmdline_param(char *cmdline, unsigned long long addr,
char *cmdstr, char *byte)
 {
-   int cmdlen, len, align = 1024;
+   int align = 1024;
char str[COMMAND_LINE_SIZE], *ptr;
 
/* Passing in =xxxK / =xxxM format. Saves space required in cmdline.*/
@@ -284,11 +293,8 @@ static int add_cmdline_param(char *cmdline, unsigned long 
long addr,
ptr += strlen(str);
ulltoa(addr, ptr);
strcat(str, byte);
-   len = strlen(str);
-   cmdlen = strlen(cmdline) + len;
-   if (cmdlen > (COMMAND_LINE_SIZE - 1))
-   die("Command line overflow\n");
-   strcat(cmdline, str);
+
+   add_cmdline(cmdline, str);
 
dbgprintf("Command line after adding elfcorehdr: %s\n", cmdline);
 
@@ -365,6 +371,7 @@ int load_crashdump_segments(struct kexec_info *info, char 
*mod_cmdline,
 */
add_cmdline_param(mod_cmdline, elfcorehdr, " elfcorehdr=", "K");
add_cmdline_param(mod_cmdline, saved_max_mem, " savemaxmem=", "M");
+   add_cmdline(mod_cmdline, " maxcpus=1");
return 0;
 }
 
diff --git a/kexec/arch/ppc/fixup_dtb.c b/kexec/arch/ppc/fixup_dtb.c
index e9890a4..f832026 100644
--- a/kexec/arch/ppc/fixup_dtb.c
+++ b/kexec/arch/ppc/fixup_dtb.c
@@ -172,6 +172,9 @@ static void fixup_reserve_regions(struct kexec_info *info, 
char *blob_buf)
}
}
 
+#if 0
+   /* XXX: Do not reserve spin-table for CPUs. */
+
/* Add reserve regions for cpu-release-addr */
nodeoffset = fdt_node_offset_by_prop_value(blob_buf, -1, "device_type",
"cpu", 4);
while (nodeoffset != -FDT_ERR_NOTFOUND) {
@@ -201,6 +204,7 @@ static void fixup_reserve_regions(struct kexec_info *info, 
char *blob_buf)
nodeoffset = fdt_node_offset_by_prop_value(blob_buf, nodeoffset,
"device_type", "cpu", 4);
}
+#endif
 
 out:
print_fdt_reserve_regions(blob_buf);



Handling spin table in kdump

2012-05-22 Thread Suzuki K. Poulose

Hi

I came across the following issue while testing Kdump on an SMP
board (Currituck) running a non-SMP kernel. Even though the kernel is UP,
the device-tree still has the nodes for the second CPU and the related
details.


The kexec tool adds the spin table area as a reserved section in the 
device tree for the dump capture kernel. This value is read from the 
'cpu-release-addr'.


But now, if the spin table is not located within the 'Reserved region' 
for the crash kernel, the dump capture kernel would fail to boot, 
hitting a BUG in mm/bootmem.c as in [1].


This is because we try to reserve a region which is not available to the 
kernel.


So I am wondering how this is really handled on an SMP board (fsl_booke).

There are two possible solutions :
1) Do not reserve the regions for the spin-table, as we will use
only the crashing CPU in the second kernel(maxcpus=1).


2) Add the spin-table region to the available memory regions passed
to the kernel by kexec-tools.

I have tested (1) and it works fine for me. Yet to test (2).
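
For (2), a rough sketch of what the kexec-tools side could look like.
This is hypothetical code, not a tested patch; struct memory_range and
RANGE_RAM are repeated here from kexec-tools' kexec/kexec.h for
completeness:

struct memory_range {
	unsigned long long start, end;
	unsigned type;
};
#define RANGE_RAM 0

/* Add the spin-table area (from cpu-release-addr) to the ranges the
 * capture kernel is allowed to use, instead of reserving it. */
static void add_spin_table_region(struct memory_range *ranges, int *nr,
				  unsigned long long cpu_release_addr,
				  unsigned long long size)
{
	ranges[*nr].start = cpu_release_addr;
	ranges[*nr].end = cpu_release_addr + size - 1;
	ranges[*nr].type = RANGE_RAM;
	(*nr)++;
}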


Thoughts ?


Thanks
Suzuki



[1] Kernel Bug



Linux version 3.3.0-rc5 (r...@suzukikp.in.ibm.com) (gcc version 4.3.4 
[gcc-4_3-branch revision 152973] (GCC) ) #12 Tue May 22 18:03:01 IST 2012

Found legacy serial port 0 for /plb/opb/serial@1000
  mem=2001000, taddr=2001000, irq=0, clk=1851851, speed=115200
[ cut here ]
kernel BUG at mm/bootmem.c:351!
Vector: 700 (Program Check) at [c8a61e90]
pc: c847f91c: mark_bootmem+0xa0/0x14c
lr: c8472670: do_init_bootmem+0x1ac/0x218
sp: c8a61f40
   msr: 21000
  current = 0xc8a4a500
pid   = 0, comm = swapper
kernel BUG at mm/bootmem.c:351!
enter ? for help
[c8a61f70] c8472670 do_init_bootmem+0x1ac/0x218
[c8a61f90] c847025c setup_arch+0x1bc/0x234
[c8a61fb0] c846b62c start_kernel+0x98/0x358
[c8a61ff0] c8b4 _start+0xb4/0xf8



Re: [PATCH 0/2] Kdump support for 47x

2012-04-25 Thread Suzuki K. Poulose

On 04/16/2012 01:56 PM, Suzuki K. Poulose wrote:

The following series implements Kexec/Kdump support for
PPC_47x based platforms. Doesn't support SMP yet.

I have tested these patches on the following simulators:
 1) simics
 2) IBM ISS for ppc476.

Changes since V1:
  * Initialize the SPRN_PID to kernel pid (0) before the TLB operations in
setup_map_47x




Josh,

Did you get a chance to look at this ?

Thanks

Suzuki


---

Suzuki K. Poulose (2):
   [47x] Enable CRASH_DUMP
   [47x] Kernel support for KEXEC


  arch/powerpc/Kconfig  |4 -
  arch/powerpc/kernel/misc_32.S |  195 -
  2 files changed, 191 insertions(+), 8 deletions(-)





[PATCH] [44x][KEXEC] Fix/Initialize PID to kernel PID before the TLB search

2012-04-16 Thread Suzuki K. Poulose
Initialize the PID register with kernel pid (0) before we start
setting the TLB mapping for KEXEC. Also set the MMUCR[TID] to kernel
PID.

This was spotted while testing kexec on ISS for 47x. ISS doesn't
return a successful tlbsx for a kernel address with the PID set to a
user PID, though the hardware/QEMU/simics work fine.

This patch is harmless and initializes the PID to 0 (kernel PID), which
is usually the case during a normal kernel boot. This would fix kexec
on ISS for 440. I have tested this patch on the Sequoia board.

Signed-off-by: Suzuki K Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@us.ibm.com
---

 arch/powerpc/kernel/misc_32.S |8 ++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index 7cd07b4..d7e05d2 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -761,8 +761,12 @@ relocate_new_kernel:
mr  r30, r4
mr  r31, r5
 
-   /* Load our MSR_IS and TID to MMUCR for TLB search */
-   mfspr   r3,SPRN_PID
+   /* 
+* Load the PID with kernel PID (0).
+* Also load our MSR_IS and TID to MMUCR for TLB search.
+*/
+   li  r3, 0
+   mtspr   SPRN_PID, r3
mfmsr   r4
andi.   r4,r4,MSR_IS@l
beq wmmucr



[PATCH 0/2] Kdump support for 47x

2012-04-16 Thread Suzuki K. Poulose
The following series implements Kexec/Kdump support for
PPC_47x based platforms. Doesn't support SMP yet.

I have tested these patches on the following simulators:
1) simics
2) IBM ISS for ppc476.

Changes since V1:
 * Initialize the SPRN_PID to kernel pid (0) before the TLB operations in
   setup_map_47x


---

Suzuki K. Poulose (2):
  [47x] Enable CRASH_DUMP
  [47x] Kernel support for KEXEC


 arch/powerpc/Kconfig  |4 -
 arch/powerpc/kernel/misc_32.S |  195 -
 2 files changed, 191 insertions(+), 8 deletions(-)

-- 
Suzuki K. Poulose



[PATCH 1/2] [47x] Kernel support for KEXEC

2012-04-16 Thread Suzuki K. Poulose
This patch adds support for creating 1:1 mapping for the
PPC_47x during a KEXEC. The implementation is similar
to that of the PPC440x which is described here :

http://patchwork.ozlabs.org/patch/104323/

PPC_47x MMU :

The 47x uses Unified TLB 1024 entries, with 4-way associative
mapping (4 x 256 entries). The index to be used is calculated
by the MMU by hashing the PID, EPN and TS. The software can
choose to specify the way by setting bit 0 (enable way select)
and the way in bits 1-2 of TLB Word 0.
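
For quick reference, the way-select encoding of TLB Word 0 written out
as C constants (my own illustration, not part of the patch; the values
mirror the 0x80000000 way-select bit and the 0x20000000 per-way
increment used by the assembly below):

#define TLB0_WAY_SELECT_EN  0x80000000u             /* bit 0: use the given way */
#define TLB0_WAY(w)         (((unsigned)(w) & 3u) << 29) /* bits 1-2: way 0-3 */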

Implementation:

The patch erases all the UTLB entries, which includes the TLB entry
covering the mapping for our code. The shadow TLB caches the
mapping for the running code which helps us to continue the
execution until we do isync/rfi. We then create a tmp mapping
for the current code in the other address space (TS) and switch
to it.

Then we create a 1:1 mapping(EPN=RPN) for 0-2GiB in the original
address space and switch to the new mapping.

TODO: Add SMP support.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/Kconfig  |2 
 arch/powerpc/kernel/misc_32.S |  195 -
 2 files changed, 190 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 613eacf..4f64860 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -351,7 +351,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
	bool "kexec system call (EXPERIMENTAL)"
-   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !PPC_47x)) && 
EXPERIMENTAL
+   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP)) && EXPERIMENTAL
help
  kexec is a system call that implements the ability to shutdown your
  current kernel, and to start another kernel.  It is like a reboot
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index d7e05d2..386d57f 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -738,8 +738,23 @@ relocate_new_kernel:
mr  r5, r31
 
li  r0, 0
-#elif defined(CONFIG_44x) && !defined(CONFIG_PPC_47x)
+#elif defined(CONFIG_44x)
 
+   /* Save our parameters */
+   mr  r29, r3
+   mr  r30, r4
+   mr  r31, r5
+
+#ifdef CONFIG_PPC_47x
+   /* Check for 47x cores */
+   mfspr   r3,SPRN_PVR
+   srwi    r3,r3,16
+   cmplwi  cr0,r3,PVR_476@h
+   beq setup_map_47x
+   cmplwi  cr0,r3,PVR_476_ISS@h
+   beq setup_map_47x
+#endif /* CONFIG_PPC_47x */
+   
 /*
  * Code for setting up 1:1 mapping for PPC440x for KEXEC
  *
@@ -753,13 +768,8 @@ relocate_new_kernel:
  * 5) Invalidate the tmp mapping.
  *
  * - Based on the kexec support code for FSL BookE
- * - Doesn't support 47x yet.
  *
  */
-   /* Save our parameters */
-   mr  r29, r3
-   mr  r30, r4
-   mr  r31, r5
 
/* 
 * Load the PID with kernel PID (0).
@@ -904,6 +914,179 @@ next_tlb:
li  r3, 0
tlbwe   r3, r24, PPC44x_TLB_PAGEID
sync
+   b   ppc44x_map_done
+
+#ifdef CONFIG_PPC_47x
+
+   /* 1:1 mapping for 47x */
+
+setup_map_47x:
+
+   /*
+* Load the kernel pid (0) to PID and also to MMUCR[TID].
+* Also set the MSR IS->MMUCR STS
+*/
+   li  r3, 0
+   mtspr   SPRN_PID, r3        /* Set PID */
+   mfmsr   r4                  /* Get MSR */
+   andi.   r4, r4, MSR_IS@l    /* TS=1? */
+   beq 1f                      /* If not, leave STS=0 */
+   oris    r3, r3, PPC47x_MMUCR_STS@h  /* Set STS=1 */
+1: mtspr   SPRN_MMUCR, r3      /* Put MMUCR */
+   sync
+
+   /* Find the entry we are running from */
+   bl  2f
+2: mflr    r23
+   tlbsx   r23, 0, r23
+   tlbre   r24, r23, 0 /* TLB Word 0 */
+   tlbre   r25, r23, 1 /* TLB Word 1 */
+   tlbre   r26, r23, 2 /* TLB Word 2 */
+
+
+   /*
+* Invalidates all the tlb entries by writing to 256 RPNs(r4)
+* of 4k page size in all  4 ways (0-3 in r3).
+* This would invalidate the entire UTLB including the one we are
+* running from. However the shadow TLB entries would help us 
+* to continue the execution, until we flush them (rfi/isync).
+*/
+   addis   r3, 0, 0x8000   /* specify the way */
+   addi    r4, 0, 0            /* TLB Word0 = (EPN=0, VALID = 0) */
+   addi    r5, 0, 0
+   b   clear_utlb_entry
+
+   /* Align the loop to speed things up. from head_44x.S */
+   .align  6
+
+clear_utlb_entry:
+
+   tlbwe   r4, r3, 0
+   tlbwe   r5, r3, 1
+   tlbwe   r5, r3, 2
+   addis   r3, r3, 0x2000  /* Increment the way */
+   cmpwi   r3, 0
+   bne clear_utlb_entry
+   addis   r3, 0, 0x8000
+   addis   r4, r4, 0x100

[PATCH 2/2] [47x] Enable CRASH_DUMP

2012-04-16 Thread Suzuki K. Poulose
Now that we have KEXEC and relocatable kernel working on 47x (!SMP)
enable CRASH_DUMP.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/Kconfig |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 4f64860..629543a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -368,7 +368,7 @@ config KEXEC
 
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
-   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP && !PPC_47x)
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)
select RELOCATABLE if PPC64 || 44x
select DYNAMIC_MEMSTART if FSL_BOOKE
help



Re: 3.4.0-rc1: No init found

2012-04-04 Thread Suzuki K. Poulose

On 04/03/2012 10:48 PM, Christian Kujau wrote:

On Tue, 3 Apr 2012 at 18:08, Benjamin Herrenschmidt wrote:

I have observed this randomly on the G5 ... sometimes, if I try again,
it works... it's very very odd. There is some kind of race maybe with
async startup ? Or a problem with the vfs path walking ? It's certainly
not easily reproducable for me, it goes away from one boot to the next.


It's 100% reproducible for me. This PowerBook G4 (1.25Ghz) is not the
fastest though, maybe a race triggers more easily here...?


PS: Unfortunately I cannot boot into the old (3.3-rc7) kernel
 right now (which is still installed via yaboot and present in
 /boot), because of this:
 http://nerdbynature.de/bits/3.4.0-rc1/init/mac-invalid-memory.JPG
 Booting into Debian's squeeze kernel (2.6.32) which resides in
 the same /boot directory succeeds.


Hrm, did it used to boot ?


I'm using the backup kernel only when the new one has an issue, so I
have not tested it for a while, but it used to work, for sure.


Can you do printenv in OF and tell me what
your load-base, real-base, virt-base etc... are ?


load-base is 0x80, real-base and virt-base is set to -1, please see
http://nerdbynature.de/bits/3.4.0-rc1/init/printenv-1.JPG

Not sure if this is related, but at the end of each kernel compilation,
the following messages are printed:


   SYSMAP  System.map
   SYSMAP  .tmp_System.map
   WRAP    arch/powerpc/boot/zImage.pmac
INFO: Uncompressed kernel (size 0x6e52f8) overlaps the address of the 
wrapper(0x400000)
INFO: Fixing the link_address of wrapper to (0x700000)
   WRAP    arch/powerpc/boot/zImage.coff
INFO: Uncompressed kernel (size 0x6e52f8) overlaps the address of the 
wrapper(0x500000)
INFO: Fixing the link_address of wrapper to (0x700000)
   WRAP    arch/powerpc/boot/zImage.miboot
INFO: Uncompressed kernel (size 0x6d4b80) overlaps the address of the 
wrapper(0x400000)
INFO: Fixing the link_address of wrapper to (0x700000)
   Building modules, stage 2.
   MODPOST 24 modules


I started to see these messages in January (around Linux 3.2.0), but never
investigated what it was since the produced kernels continued to boot just
fine.


The above change was added by me. The message is printed when the 
'wrapper' script finds that decompressed kernel overlaps the 'bootstrap 
code' which does the decompression. So it shifts the 'address' of the 
bootstrap code to the next higher MB. As such it is harmless.



Thanks
Suzuki



Re: [PATCH 0/2] Kdump support for PPC_47x

2012-03-15 Thread Suzuki K. Poulose

On 03/15/2012 11:41 AM, Tony Breeds wrote:

On Wed, Mar 14, 2012 at 03:52:30PM +0530, Suzuki K. Poulose wrote:

The following series implements Kexec/Kdump support for
PPC_47x based platforms. Doesn't support SMP yet.

I have tested these patches on simics simulator for ppc476.


I'll test these patches on the currituck board I have here early next
week.


Thanks a lot Tony !

Suzuki



[PATCH 0/2] Kdump support for PPC_47x

2012-03-14 Thread Suzuki K. Poulose
The following series implements Kexec/Kdump support for
PPC_47x based platforms. Doesn't support SMP yet.

I have tested these patches on simics simulator for ppc476.

---

Suzuki K. Poulose (2):
  [47x] Enable CRASH_DUMP
  [47x] Kernel support for KEXEC


 arch/powerpc/Kconfig  |4 -
 arch/powerpc/kernel/misc_32.S |  197 -
 2 files changed, 193 insertions(+), 8 deletions(-)

-- 
Suzuki K. Poulose



[PATCH 2/2] [47x] Enable CRASH_DUMP

2012-03-14 Thread Suzuki K. Poulose
Now that we have KEXEC and relocatable kernel working on 47x (!SMP)
enable CRASH_DUMP.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/Kconfig |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 975aae5..10070d2 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -363,7 +363,7 @@ config KEXEC
 
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
-   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP && !PPC_47x)
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)
select RELOCATABLE if PPC64 || 44x
select DYNAMIC_MEMSTART if FSL_BOOKE
help



[PATCH 1/2] [47x] Kernel support for KEXEC

2012-03-14 Thread Suzuki K. Poulose
This patch adds support for creating 1:1 mapping for the
PPC_47x during a KEXEC. The implementation is similar
to that of the PPC440x which is described here :

http://patchwork.ozlabs.org/patch/104323/

PPC_47x MMU :

The 47x uses Unified TLB 1024 entries, with 4-way associative
mapping (4 x 256 entries). The index to be used is calculated
by the MMU by hashing the PID, EPN and TS. The software can
choose to specify the way by setting bit 0 (enable way select)
and the way in bits 1-2 of TLB Word 0.

Implementation:

The patch erases all the UTLB entries, which includes the TLB entry
covering the mapping for our code. The shadow TLB caches the
mapping for the running code which helps us to continue the
execution until we do isync/rfi. We then create a tmp mapping
for the current code in the other address space (TS) and switch
to it.

Then we create a 1:1 mapping(EPN=RPN) for 0-2GiB in the original
address space and switch to the new mapping.

TODO: Add SMP support.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/Kconfig  |2 
 arch/powerpc/kernel/misc_32.S |  197 -
 2 files changed, 192 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fe56229..975aae5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -346,7 +346,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
	bool "kexec system call (EXPERIMENTAL)"
-   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !PPC_47x)) && 
EXPERIMENTAL
+   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP)) && EXPERIMENTAL
help
  kexec is a system call that implements the ability to shutdown your
  current kernel, and to start another kernel.  It is like a reboot
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index 7cd07b4..3e7154b 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -738,8 +738,23 @@ relocate_new_kernel:
mr  r5, r31
 
li  r0, 0
-#elif defined(CONFIG_44x) && !defined(CONFIG_PPC_47x)
+#elif defined(CONFIG_44x)
 
+   /* Save our parameters */
+   mr  r29, r3
+   mr  r30, r4
+   mr  r31, r5
+
+#ifdef CONFIG_PPC_47x
+   /* Check for 47x cores */
+   mfspr   r3,SPRN_PVR
+   srwi    r3,r3,16
+   cmplwi  cr0,r3,PVR_476@h
+   beq setup_map_47x
+   cmplwi  cr0,r3,PVR_476_ISS@h
+   beq setup_map_47x
+#endif /* CONFIG_PPC_47x */
+   
 /*
  * Code for setting up 1:1 mapping for PPC440x for KEXEC
  *
@@ -753,13 +768,8 @@ relocate_new_kernel:
  * 5) Invalidate the tmp mapping.
  *
  * - Based on the kexec support code for FSL BookE
- * - Doesn't support 47x yet.
  *
  */
-   /* Save our parameters */
-   mr  r29, r3
-   mr  r30, r4
-   mr  r31, r5
 
/* Load our MSR_IS and TID to MMUCR for TLB search */
mfspr   r3,SPRN_PID
@@ -900,6 +910,181 @@ next_tlb:
li  r3, 0
tlbwe   r3, r24, PPC44x_TLB_PAGEID
sync
+   b   ppc44x_map_done
+
+#ifdef CONFIG_PPC_47x
+
+   /* 1:1 mapping for 47x */
+
+setup_map_47x:
+
+   /* Load our current PID->MMUCR TID and MSR IS->MMUCR STS */
+   mfspr   r3, SPRN_PID        /* Get PID */
+   mfmsr   r4                  /* Get MSR */
+   andi.   r4, r4, MSR_IS@l    /* TS=1? */
+   beq 1f                      /* If not, leave STS=0 */
+   oris    r3, r3, PPC47x_MMUCR_STS@h  /* Set STS=1 */
+1: mtspr   SPRN_MMUCR, r3      /* Put MMUCR */
+   sync
+
+   /* Find the entry we are running from */
+   bl  2f
+2: mflrr23
+   tlbsx   r23, 0, r23
+   tlbre   r24, r23, 0 /* TLB Word 0 */
+   tlbre   r25, r23, 1 /* TLB Word 1 */
+   tlbre   r26, r23, 2 /* TLB Word 2 */
+
+
+   /* Initialize MMUCR */
+   li  r5, 0
+   mtspr   SPRN_MMUCR, r5
+   sync
+
+
+   /*
+* Invalidates all the tlb entries by writing to 256 RPNs(r4)
+* of 4k page size in all  4 ways (0-3 in r3).
+* This would invalidate the entire UTLB including the one we are
+* running from. However the shadow TLB entries would help us 
+* to continue the execution, until we flush them (rfi/isync).
+*/
+   addis   r3, 0, 0x8000   /* specify the way */
+   addi    r4, 0, 0            /* TLB Word0 = (EPN=0, VALID = 0) */
+   addi    r5, 0, 0
+   b   clear_utlb_entry
+
+   /* Align the loop to speed things up. from head_44x.S */
+   .align  6
+
+clear_utlb_entry:
+
+   tlbwe   r4, r3, 0
+   tlbwe   r5, r3, 1
+   tlbwe   r5, r3, 2
+   addis   r3, r3, 0x2000  /* Increment the way */
+   cmpwi   r3, 0
+   bne clear_utlb_entry
+   addis   r3, 0, 0x8000

Re: [PATCH 0/2] Kdump support for PPC_47x

2012-03-14 Thread Suzuki K. Poulose

On 03/15/2012 12:27 AM, Josh Boyer wrote:

On Wed, Mar 14, 2012 at 6:22 AM, Suzuki K. Poulosesuz...@in.ibm.com  wrote:

The following series implements Kexec/Kdump support for
PPC_47x based platforms. Doesn't support SMP yet.

I have tested these patches on simics simulator for ppc476.


Do you happen to know if these work in the IBM Instruction Set Simulator for
47x?  That would be the only commonly available 476 platform that I'm aware
of.
I haven't tested it on IBM ISS for 47x. However, the code is similar to
what we have in the boot map setup. I will see if I can get access to
one and test it there.


Thanks
Suzuki



Re: [PATCH v1 0/4][makedumpfile] vmalloc translation support for PPC32

2012-02-20 Thread Suzuki K. Poulose

On 02/20/2012 03:26 PM, Atsushi Kumagai wrote:

Hi, Benjamin
Hi, Suzuki

On Fri, 17 Feb 2012 19:39:29 +1100
Benjamin Herrenschmidt b...@kernel.crashing.org wrote:


On Fri, 2012-02-17 at 11:25 +0530, Suzuki K. Poulose wrote:

Could you tell me what kind of data is stored in vmalloc region in

PPC ?

I want to estimate importance of your patches for makedumpfile.

I know at least the modules are loaded in the vmalloc'd region. I have
Cc'ed linux-ppc dev. We should be able to get enough info from the
experts here.

Josh / Kumar / Others,

Could you please let us know your thoughts ?


Modules, driver IO mappings, etc... I can see that being useful for
crashdumps.


Thank you for your information.

The above data may be required for one function of makedumpfile (filtering
out kernel data) but is not as crucial for makedumpfile as the page
descriptors and related data (e.g. pglist_data).

Moreover, I'm preparing the release of v1.4.3 now, so I'll merge vmalloc
support for PPC32 into v1.4.4. Is it all right for you, Suzuki ?


Yep,  fine with me.

Thanks
Suzuki



Re: [PATCH v1 0/4][makedumpfile] vmalloc translation support for PPC32

2012-02-16 Thread Suzuki K. Poulose

On 02/17/2012 06:02 AM, Atsushi Kumagai wrote:

Hi, Suzuki

On Thu, 16 Feb 2012 09:05:17 +0530
Suzuki K. Poulose suz...@in.ibm.com wrote:


The series introduces an infrastructure to define platform specific
bits for page translation for PPC and PPC44x support for vmalloc
translation.

This is similar to what we have implemented for Crash-utility.

The patches are based makedumpfile-1.4.2 + PPC32 support patches
which is queued in for 1.4.3.

---

Suzuki K. Poulose (4):
   [makedumpfile][ppc] PPC44x page translation definitions
   [makedumpfile][ppc] Define platform descriptors for page translation
   [makedumpfile][ppc] Generic vmalloc translation support
   [makedumpfile] Add read_string routine


  arch/ppc.c |  108 ++--
  makedumpfile.c |   14 +++
  makedumpfile.h |   21 +++
  3 files changed, 139 insertions(+), 4 deletions(-)

--
Suzuki Poulose


Could you tell me what kind of data is stored in vmalloc region in PPC ?
I want to estimate importance of your patches for makedumpfile.

I know at least the modules are loaded in the vmalloc'd region. I have
Cc'ed linux-ppc dev. We should be able to get enough info from the experts here.

Josh / Kumar / Others,

Could you please let us know your thoughts ?

Thanks

Suzuki



[PATCH] [boot] Change the WARN to INFO for boot wrapper overlap message

2011-12-20 Thread Suzuki K. Poulose
commit c55aef0e5bc6 ("powerpc/boot: Change the load address
for the wrapper to fit the kernel") introduced a WARNING to
inform the user that the uncompressed kernel would overlap
the boot uncompressing wrapper code. Change it to an INFO.

I initially thought this would be a 'WARNING' for those
boards where the link_address should be fixed, so that the
user could take action accordingly.

Changing the same to INFO.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/boot/wrapper |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index c8d6aaf..2b171cd 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -282,9 +282,9 @@ round_size=0x$(printf "%x" $round_size)
 link_addr=$(printf "%d" $link_address)
 
 if [ $link_addr -lt $strip_size ]; then
-echo "WARN: Uncompressed kernel (size 0x$(printf "%x\n" $strip_size))" \
+echo "INFO: Uncompressed kernel (size 0x$(printf "%x\n" $strip_size))" \
    "overlaps the address of the wrapper($link_address)"
-echo "WARN: Fixing the link_address of wrapper to ($round_size)"
+echo "INFO: Fixing the link_address of wrapper to ($round_size)"
 link_address=$round_size
 fi
 



[PATCH v5 0/7] Kdump support for PPC440x

2011-12-15 Thread Suzuki K. Poulose
The following series implements:

 * Generic framework for relocatable kernel on PPC32, based on processing 
   the dynamic relocation entries.
 * Relocatable kernel support for 44x
 * Kdump support for 44x. Doesn't support 47x yet, as the kexec 
   support is missing.

Changes from V4:

 (Suggested by : Segher Boessenkool seg...@kernel.crashing.org )
 * Added 'sync' between dcbst and icbi for the modified instruction in
   relocate().
 * Better comments on register usage in reloc_32.S
 * Better check for relocation types in relocs_check.pl.
 
Changes from V3:

 * Added a new config - NONSTATIC_KERNEL - to group different types of 
relocatable
   kernel. (Suggested by: Josh Boyer)
 * Added supported ppc relocation types in relocs_check.pl for verifying the
   relocations used in the kernel.

Changes from V2:

 * Renamed old style mapping based RELOCATABLE on BookE to DYNAMIC_MEMSTART.
   Suggested by: Scott Wood
 * Added support for DYNAMIC_MEMSTART on PPC440x
 * Reverted back to RELOCATABLE and RELOCATABLE_PPC32 from RELOCATABLE_PPC32_PIE
   for relocation based on processing dynamic reloc entries for PPC32.
 * Ensure the modified instructions are flushed and the i-cache invalidated at
   the end of relocate(). - Reported by : Josh Poimboeuf

Changes from V1:

 * Split the patch 'Enable CONFIG_RELOCATABLE for PPC44x' to move some
   of the generic bits to a new patch.
 * Renamed RELOCATABLE_PPC32 to RELOCATABLE_PPC32_PIE and provided options to
   retain the old style mapping. (Suggested by: Scott Wood)
 * Added support for avoiding the overlapping of uncompressed kernel
   with boot wrapper for PPC images.

The patches are based on -next tree for ppc.

I have tested these patches on Ebony, Sequoia and Virtex(QEMU Emulated).
I haven't tested the RELOCATABLE bits on PPC_47x yet, as I don't have access
to one. However, RELOCATABLE should work fine there as we only depend on the 
runtime address and the XLAT entry setup by the boot loader. It would be
great if somebody could test these patches on a 47x.

---

Suzuki K. Poulose (7):
  [boot] Change the load address for the wrapper to fit the kernel
  [44x] Enable CRASH_DUMP for 440x
  [44x] Enable CONFIG_RELOCATABLE for PPC44x
  [ppc] Define virtual-physical translations for RELOCATABLE
  [ppc] Process dynamic relocations for kernel
  [44x] Enable DYNAMIC_MEMSTART for 440x
  [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE


 arch/powerpc/Kconfig  |   45 -
 arch/powerpc/Makefile |6 -
 arch/powerpc/boot/wrapper |   20 ++
 arch/powerpc/configs/44x/iss476-smp_defconfig |3 
 arch/powerpc/include/asm/kdump.h  |4 
 arch/powerpc/include/asm/page.h   |   89 ++-
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/crash_dump.c  |4 
 arch/powerpc/kernel/head_44x.S|  105 +
 arch/powerpc/kernel/head_fsl_booke.S  |2 
 arch/powerpc/kernel/machine_kexec.c   |2 
 arch/powerpc/kernel/prom_init.c   |2 
 arch/powerpc/kernel/reloc_32.S|  208 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +
 arch/powerpc/mm/44x_mmu.c |2 
 arch/powerpc/mm/init_32.c |7 +
 arch/powerpc/relocs_check.pl  |   14 +-
 17 files changed, 495 insertions(+), 28 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

--
Suzuki K. Poulose



[PATCH v5 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE

2011-12-15 Thread Suzuki K. Poulose
The current implementation of CONFIG_RELOCATABLE in BookE is based
on mapping the page aligned kernel load address to KERNELBASE. This
approach however is not enough for platforms, where the TLB page size
is large (e.g, 256M on 44x). So we are renaming the RELOCATABLE used
currently in BookE to DYNAMIC_MEMSTART to reflect the actual method.

The CONFIG_RELOCATABLE for PPC32(BookE) based on processing of the
dynamic relocations will be introduced in the later in the patch series.

This change would allow the use of the old method of RELOCATABLE for
platforms which can afford to enforce the page alignment (platforms with
smaller TLB size).

Changes since v3:

* Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which is
  either a RELOCATABLE or DYNAMIC_MEMSTART(Suggested by: Josh Boyer)

Suggested-by: Scott Wood scottw...@freescale.com
Tested-by: Scott Wood scottw...@freescale.com

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Scott Wood scottw...@freescale.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   60 +
 arch/powerpc/configs/44x/iss476-smp_defconfig |3 +
 arch/powerpc/include/asm/kdump.h  |4 +-
 arch/powerpc/include/asm/page.h   |4 +-
 arch/powerpc/kernel/crash_dump.c  |4 +-
 arch/powerpc/kernel/head_44x.S|4 +-
 arch/powerpc/kernel/head_fsl_booke.S  |2 -
 arch/powerpc/kernel/machine_kexec.c   |2 -
 arch/powerpc/kernel/prom_init.c   |2 -
 arch/powerpc/mm/44x_mmu.c |2 -
 10 files changed, 56 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 7c93c7e..fac92ce 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -364,7 +364,8 @@ config KEXEC
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64 || FSL_BOOKE
+   select RELOCATABLE if PPC64
+   select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.
  The same kernel binary can be used as production kernel and dump
@@ -773,6 +774,10 @@ source drivers/rapidio/Kconfig
 
 endmenu
 
+config NONSTATIC_KERNEL
+   bool
+   default n
+
 menu "Advanced setup"
depends on PPC32
 
@@ -822,23 +827,39 @@ config LOWMEM_CAM_NUM
	int "Number of CAMs to use to map low memory" if LOWMEM_CAM_NUM_BOOL
default 3
 
-config RELOCATABLE
-   bool "Build a relocatable kernel (EXPERIMENTAL)"
+config DYNAMIC_MEMSTART
+   bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-   help
- This builds a kernel image that is capable of running at the
- location the kernel is loaded at (some alignment restrictions may
- exist).
-
- One use is for the kexec on panic case where the recovery kernel
- must live at a different physical address than the primary
- kernel.
-
- Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
- it has been loaded at and the compile time physical addresses
- CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
- setting can still be useful to bootwrappers that need to know the
- load location of the kernel (eg. u-boot/mkimage).
+   select NONSTATIC_KERNEL
+   help
+ This option enables the kernel to be loaded at any page aligned
+ physical address. The kernel creates a mapping from KERNELBASE to 
+ the address where the kernel is loaded. The page size here implies
+ the TLB page size of the mapping for kernel on the particular 
platform.
+ Please refer to the init code for finding the TLB page size.
+
+ DYNAMIC_MEMSTART is an easy way of implementing pseudo-RELOCATABLE
+ kernel image, where the only restriction is the page aligned kernel
+ load address. When this option is enabled, the compile time physical 
+ address CONFIG_PHYSICAL_START is ignored.
+
+# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
+# config RELOCATABLE
+#  bool "Build a relocatable kernel (EXPERIMENTAL)"
+#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+#  help
+#This builds a kernel image that is capable of running at the
+#location the kernel is loaded at, without any alignment restrictions.
+#
+#One use is for the kexec on panic case where the recovery kernel
+#must live at a different physical address than the primary
+#kernel.
+#
+#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from

[PATCH v5 4/7] [ppc] Define virtual-physical translations for RELOCATABLE

2011-12-15 Thread Suzuki K. Poulose
We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)

relocate() is called with the Effective Virtual Base Address (as
shown below)

                | Phys. Addr  | Virt. Addr  |
    Page        |---------------------------|
    Boundary    |             |             |
                |             |             |
                |             |             |
    Kernel Load |_____________|__ _ _ _ _ _ |<- Effective
    Addr(_stext)|             |  ^          |   Virt. Base Addr
                |             |  |          |
                |             |  |          |
                |             |reloc_offset |
                |             |  |          |
                |             |  |          |
                |             |__v__________|<- (KERNELBASE)%TLB_SIZE
                |             |             |
                |             |             |
                |             |             |
    Page        |---------------------------|
    Boundary    |             |             |
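
The same arithmetic, written out as a small C sketch for clarity (my own
illustration, assuming the 256MB pinned TLB size used on 44x; it is not
part of the patch):

#define KERNEL_TLB_PIN_SIZE	0x10000000UL	/* 256MB pinned TLB on 44x */

/* Effective virtual base for a kernel whose _stext runs at 'stext_run'. */
static unsigned long effective_virt_base(unsigned long kernelbase,
					 unsigned long stext_run)
{
	return (kernelbase & ~(KERNEL_TLB_PIN_SIZE - 1)) +
	       (stext_run & (KERNEL_TLB_PIN_SIZE - 1));
}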


On BookE, we need __va() & __pa() early in the boot process to access
the device tree.

Currently this has been defined as :

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) -
PHYSICAL_START + KERNELBASE))
where:
 PHYSICAL_START is kernstart_addr - a variable updated at runtime.
 KERNELBASE is the compile time Virtual base address of kernel.

This won't work for us, as kernstart_addr is dynamic and will yield different
results for __va()/__pa() for same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
= 0xbc100000 , which is wrong.

it should be : 0xc0000000 + 0x100000 = 0xc0100000

On platforms which support AMP, like PPC_47x (based on 44x), the kernel
could be loaded at highmem. Hence we cannot always depend on the compile
time constants for mapping.

Here are the possible solutions:

1) Update kernstart_addr(PHSYICAL_START) to match the Physical address of
compile time KERNELBASE value, instead of the actual Physical_Address(_stext).

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va()  __pa() with relocation offset


#ifdef  CONFIG_RELOCATABLE_PPC32
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + 
(KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + 
RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

  a) A variable, say relocation_offset (like kernstart_addr), updated
 at boot time. This impacts performance, as we have to load an additional
 variable from memory.

OR

  b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
  (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

   This introduces more calculations for doing the translation.

3) Redefine __va()  __pa() with a new variable

i.e,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_RELOCATABLE_PPC32
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* CONFIG_RELOCATABLE_PPC32 */

where virt_phys_offset is updated at runtime to :

Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
 = 0xc0400000 - 0x400000
 = 0xc0000000
and

__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
 which is what we want.
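
A quick sanity check of the arithmetic above, as a stand-alone C snippet
(illustration only, using the example values from this commit message):

#include <stdio.h>

int main(void)
{
	unsigned long long kernstart_addr = 0x400000;	/* kernel load address */
	unsigned long long eff_kernelbase = 0xc0400000;	/* effective virt base */
	long long virt_phys_offset = eff_kernelbase - kernstart_addr;

	/* prints __va(0x100000) = 0xc0100000 */
	printf("__va(0x100000) = 0x%llx\n", 0x100000 + virt_phys_offset);
	return 0;
}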

I have implemented (3) in the following patch which has same cost of
operation as the existing one.

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/include/asm/page.h |   85 ++-
 arch/powerpc/mm/init_32.c   |7 +++
 2 files changed, 89 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f149967..f072e97 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -97,12 +97,26 @@ extern unsigned int HPAGE_SHIFT;
 
 extern phys_addr_t memstart_addr;
 extern phys_addr_t kernstart_addr;
+
+#ifdef CONFIG_RELOCATABLE_PPC32
+extern long long virt_phys_offset;
 #endif
+
+#endif

[PATCH v5 2/7] [44x] Enable DYNAMIC_MEMSTART for 440x

2011-12-15 Thread Suzuki K. Poulose
DYNAMIC_MEMSTART(old RELOCATABLE) was restricted only to PPC_47x variants
of 44x. This patch enables DYNAMIC_MEMSTART for 440x based chipsets.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 +-
 arch/powerpc/kernel/head_44x.S |   12 
 2 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fac92ce..5eafe95 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -829,7 +829,7 @@ config LOWMEM_CAM_NUM
 
 config DYNAMIC_MEMSTART
	bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || 44x)
select NONSTATIC_KERNEL
help
  This option enables the kernel to be loaded at any page aligned
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index 3df7735..d57 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -802,12 +802,24 @@ skpinv: addi    r4,r4,1     /* Increment */
 /*
  * Configure and load pinned entry into TLB slot 63.
  */
+#ifdef CONFIG_DYNAMIC_MEMSTART
+
+   /* Read the XLAT entry for our current mapping */
+   tlbre   r25,r23,PPC44x_TLB_XLAT
+
+   lis r3,KERNELBASE@h
+   ori r3,r3,KERNELBASE@l
+
+   /* Use our current RPN entry */
+   mr  r4,r25
+#else
 
lis r3,PAGE_OFFSET@h
ori r3,r3,PAGE_OFFSET@l
 
/* Kernel is at the base of RAM */
	li      r4, 0           /* Load the kernel physical address */
+#endif
 
/* Load the kernel PID = 0 */
li  r0,0



[PATCH v5 3/7] [ppc] Process dynamic relocations for kernel

2011-12-15 Thread Suzuki K. Poulose
The following patch implements the dynamic relocation processing for
PPC32 kernel. relocate() accepts the target virtual address and relocates
 the kernel image to the same.

Currently the following relocation types are handled :

R_PPC_RELATIVE
R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA

The last 3 relocation types in the above list depend on the value of the
symbol whose index is encoded in the relocation entry. Hence we need the
symbol table for processing such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra
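
For reference, what processing these entries amounts to, modeled in C.
This is my own sketch of the standard ELF ppc32 relocation semantics;
the patch below does the equivalent in assembly in reloc_32.S:

#include <stdint.h>

#define R_PPC_ADDR16_LO	4
#define R_PPC_ADDR16_HI	5
#define R_PPC_ADDR16_HA	6
#define R_PPC_RELATIVE	22

/* 'where' is the patched location; 'val' is the fully computed value
 * (link-time value plus the run-time load offset). */
static void apply_reloc(uint32_t type, uint8_t *where, uint32_t val)
{
	switch (type) {
	case R_PPC_RELATIVE:	/* full 32-bit word */
		*(uint32_t *)where = val;
		break;
	case R_PPC_ADDR16_LO:	/* low 16 bits of the value */
		*(uint16_t *)where = val & 0xffff;
		break;
	case R_PPC_ADDR16_HI:	/* high 16 bits of the value */
		*(uint16_t *)where = (val >> 16) & 0xffff;
		break;
	case R_PPC_ADDR16_HA:	/* high 16 bits, adjusted so adding the
				 * sign-extended low half gives 'val' */
		*(uint16_t *)where = ((val + 0x8000) >> 16) & 0xffff;
		break;
	}
}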

Changes since V4:

 ( Suggested by: Segher Boessenkool seg...@kernel.crashing.org: )
 * Added 'sync' between dcbst and icbi for the modified instruction in
   relocate().
 * Replaced msync with sync.
 * Better comments on register usage in relocate().
 * Better check for relocation types in relocs_check.pl

Changes since V3:
 * Updated relocation types for ppc in arch/powerpc/relocs_check.pl

Changes since v2:
  * Flush the d-cache'd instructions and invalidate the i-cache to reflect
the processed instructions.(Reported by: Josh Poimboeuf)

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Signed-off-by: Josh Poimboeuf jpoim...@linux.vnet.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Alan Modra amo...@au1.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   41 ---
 arch/powerpc/Makefile |6 +
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/reloc_32.S|  208 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +
 arch/powerpc/relocs_check.pl  |   14 ++
 6 files changed, 256 insertions(+), 23 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5eafe95..33b1c8c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -843,23 +843,30 @@ config DYNAMIC_MEMSTART
  load address. When this option is enabled, the compile time physical 
  address CONFIG_PHYSICAL_START is ignored.
 
-# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
-# config RELOCATABLE
-#  bool "Build a relocatable kernel (EXPERIMENTAL)"
-#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-#  help
-#This builds a kernel image that is capable of running at the
-#location the kernel is loaded at, without any alignment restrictions.
-#
-#One use is for the kexec on panic case where the recovery kernel
-#must live at a different physical address than the primary
-#kernel.
-#
-#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-#it has been loaded at and the compile time physical addresses
-#CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-#setting can still be useful to bootwrappers that need to know the
-#load location of the kernel (eg. u-boot/mkimage).
+ This option is overridden by CONFIG_RELOCATABLE
+
+config RELOCATABLE
+   bool "Build a relocatable kernel (EXPERIMENTAL)"
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   select NONSTATIC_KERNEL
+   help
+ This builds a kernel image that is capable of running at the
+ location the kernel is loaded at, without any alignment restrictions.
+ This feature is a superset of DYNAMIC_MEMSTART and hence overrides it.
+
+ One use is for the kexec on panic case where the recovery kernel
+ must live at a different physical address than the primary
+ kernel.
+
+ Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+ it has been loaded at and the compile time physical addresses
+ CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+ setting can still be useful to bootwrappers that need to know the
+ load address of the kernel (eg. u-boot/mkimage).
+
+config RELOCATABLE_PPC32
+   def_bool y
+   depends on PPC32  RELOCATABLE
 
 config PAGE_OFFSET_BOOL
	bool "Set custom page offset address"
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ffe4d88..b8b105c 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,9 +63,9 @@ override CC   += -m$(CONFIG_WORD_SIZE)
 override AR:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
 endif
 
-LDFLAGS_vmlinux-yy := -Bstatic
-LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
-LDFLAGS_vmlinux	:= $(LDFLAGS_vmlinux-yy)
+LDFLAGS_vmlinux-y := -Bstatic
+LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux	:= $(LDFLAGS_vmlinux-y)
 
 CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=no -mcall-aixdesc

[PATCH v5 5/7] [44x] Enable CONFIG_RELOCATABLE for PPC44x

2011-12-15 Thread Suzuki K. Poulose
The following patch adds relocatable kernel support - based on processing
of dynamic relocations - for PPC44x kernel.

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,256M) +
MODULO(_stext.run,256M)

relocate() is called with the Effective Virtual Base Address (as
shown below)

                | Phys. Addr  | Virt. Addr  |
    Page (256M) |---------------------------|
    Boundary    |             |             |
                |             |             |
                |             |             |
    Kernel Load |_____________|__ _ _ _ _ _ |<- Effective
    Addr(_stext)|             |  ^          |   Virt. Base Addr
                |             |  |          |
                |             |  |          |
                |             |reloc_offset |
                |             |  |          |
                |             |  |          |
                |             |__v__________|<- (KERNELBASE)%256M
                |             |             |
                |             |             |
                |             |             |
    Page (256M) |---------------------------|
    Boundary    |             |             |

The virt_phys_offset is updated accordingly, i.e,

virt_phys_offset = effective. kernel virt base - kernstart_addr

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Tony Breeds t...@bakeyournoodle.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 -
 arch/powerpc/kernel/head_44x.S |   95 +++-
 2 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 33b1c8c..8833df5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -847,7 +847,7 @@ config DYNAMIC_MEMSTART
 
 config RELOCATABLE
	bool "Build a relocatable kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && 44x
select NONSTATIC_KERNEL
help
  This builds a kernel image that is capable of running at the
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index d57..885d540 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -64,6 +64,35 @@ _ENTRY(_start);
mr  r31,r3  /* save device tree ptr */
li  r24,0   /* CPU number */
 
+#ifdef CONFIG_RELOCATABLE
+/*
+ * Relocate ourselves to the current runtime address.
+ * This is called only by the Boot CPU.
+ * relocate is called with our current runtime virtual
+ * address.
+ * r21 will be loaded with the physical runtime address of _stext
+ */
+   bl  0f  /* Get our runtime address */
+0: mflr    r21 /* Make it accessible */
+   addis   r21,r21,(_stext - 0b)@ha
+   addi    r21,r21,(_stext - 0b)@l /* Get our current runtime base */
+
+   /*
+* We have the runtime (virtual) address of our base.
+* We calculate our shift of offset from a 256M page.
+* We could map the 256M page we belong to at PAGE_OFFSET and
+* get going from there.
+*/
+   lis r4,KERNELBASE@h
+   ori r4,r4,KERNELBASE@l
+   rlwinm  r6,r21,0,4,31   /* r6 = PHYS_START % 256M */
+   rlwinm  r5,r4,0,4,31/* r5 = KERNELBASE % 256M */
+   subf    r3,r5,r6        /* r3 = r6 - r5 */
+   add     r3,r4,r3        /* Required Virtual Address */
+
+   bl  relocate
+#endif
+
bl  init_cpu_state
 
/*
@@ -86,7 +115,64 @@ _ENTRY(_start);
 
bl  early_init
 
-#ifdef CONFIG_DYNAMIC_MEMSTART
+#ifdef CONFIG_RELOCATABLE
+   /*
+* Relocatable kernel support based on processing of dynamic
+* relocation entries.
+*
+* r25 will contain RPN/ERPN for the start address of memory
+* r21 will contain the current offset of _stext
+*/
+   lis r3,kernstart_addr@ha
+   la  r3,kernstart_addr@l(r3)
+
+   /*
+* Compute the kernstart_addr.
+* kernstart_addr => (r6,r8)
+* kernstart_addr & ~0xfffffff => (r6,r7)
+*/
+   rlwinm  r6,r25,0,28,31  /* ERPN. Bits 32-35 of Address */
+   rlwinm  r7,r25,0,0,3    /* RPN - assuming 256 MB page size */
+   rlwinm  r8,r21,0,4,31   /* r8 = (_stext & 0xfffffff) */
+   or      r8,r7,r8        /* Compute the lower 32bit of kernstart_addr

[PATCH v5 6/7] [44x] Enable CRASH_DUMP for 440x

2011-12-15 Thread Suzuki K. Poulose
Now that we have relocatable kernel, supporting CRASH_DUMP only requires
turning the switches on for UP machines.

We don't have kexec support on 47x yet. Enabling SMP support would be done
as part of enabling the PPC_47x support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8833df5..fe56229 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -363,8 +363,8 @@ config KEXEC
 
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
-   depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP && !PPC_47x)
+   select RELOCATABLE if PPC64 || 44x
select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.



[PATCH v5 7/7] [boot] Change the load address for the wrapper to fit the kernel

2011-12-15 Thread Suzuki K. Poulose
The wrapper code which uncompresses the kernel in case of a 'ppc' boot
is by default loaded at 0x00400000 and the kernel will be uncompressed
to fit the location 0-0x00400000. But with dynamic relocations, the size
of the kernel may exceed 0x00400000 (4M). This would cause an overlap
of the uncompressed kernel and the boot wrapper, causing a failure in
boot.

The message looks like :


   zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
   Allocating 0x5ce650 bytes for kernel ...
   Insufficient memory for kernel at address 0! (_start=00400000, uncompressed size=00591a20)

This patch shifts the load address of the boot wrapper code to the next
higher MB, according to the size of the uncompressed vmlinux.

With the patch, we get the following message while building the image :

 WARN: Uncompressed kernel (size 0x5b0344) overlaps the address of the wrapper(0x400000)
 WARN: Fixing the link_address of wrapper to (0x600000)
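
The fix checks out arithmetically: 0x5b0344 rounded up to the next MB
boundary is 0x600000. A minimal C sketch of the rounding the wrapper
performs in shell (illustrative only):

#include <stdio.h>

/* Round "size" up to the next 1MB boundary, mirroring the wrapper's
 * $(((strip_size + 0xfffff) & 0xfff00000)). */
static unsigned long round_to_next_mb(unsigned long size)
{
	return (size + 0xfffff) & 0xfff00000;
}

int main(void)
{
	unsigned long strip_size = 0x5b0344;	/* uncompressed vmlinux size */
	unsigned long link_address = 0x400000;	/* default wrapper load address */

	if (link_address < strip_size)
		link_address = round_to_next_mb(strip_size);

	printf("link_address = 0x%lx\n", link_address);	/* prints 0x600000 */
	return 0;
}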


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/boot/wrapper |   20 
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 14cd4bc..c8d6aaf 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -257,6 +257,8 @@ vmz="$tmpdir/`basename \"$kernel\"`.$ext"
 if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 ${CROSS}objcopy $objflags $kernel $vmz.$$
 
+strip_size=$(stat -c %s $vmz.$$)
+
 if [ -n "$gzip" ]; then
 gzip -n -f -9 $vmz.$$
 fi
@@ -266,6 +268,24 @@ if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 else
vmz=$vmz.$$
 fi
+else
+# Calculate the vmlinux.strip size
+${CROSS}objcopy $objflags $kernel $vmz.$$
+strip_size=$(stat -c %s $vmz.$$)
+rm -f $vmz.$$
+fi
+
+# Round the size to next higher MB limit
+round_size=$(((strip_size + 0xfffff) & 0xfff00000))
+
+round_size=0x$(printf "%x" $round_size)
+link_addr=$(printf "%d" $link_address)
+
+if [ $link_addr -lt $strip_size ]; then
+echo "WARN: Uncompressed kernel (size 0x$(printf "%x\n" $strip_size))" \
+   "overlaps the address of the wrapper($link_address)"
+echo "WARN: Fixing the link_address of wrapper to ($round_size)"
+link_address=$round_size
 fi
 
 vmz=$vmz$gzip



[PATCH v4 0/7] Kdump support for PPC440x

2011-12-09 Thread Suzuki K. Poulose
The following series implements:

 * Generic framework for relocatable kernel on PPC32, based on processing 
   the dynamic relocation entries.
 * Relocatable kernel support for 44x
 * Kdump support for 44x. Doesn't support 47x yet, as the kexec 
   support is missing.

Changes from V3:

 * Added a new config - NONSTATIC_KERNEL - to group different types of
   relocatable kernel. (Suggested by: Josh Boyer)
 * Added supported ppc relocation types in relocs_check.pl for verifying the
   relocations used in the kernel.

Changes from V2:

 * Renamed old style mapping based RELOCATABLE on BookE to DYNAMIC_MEMSTART.
   Suggested by: Scott Wood
 * Added support for DYNAMIC_MEMSTART on PPC440x
 * Reverted back to RELOCATABLE and RELOCATABLE_PPC32 from RELOCATABLE_PPC32_PIE
   for relocation based on processing dynamic reloc entries for PPC32.
 * Ensure the modified instructions are flushed and the i-cache invalidated at
   the end of relocate(). - Reported by : Josh Poimboeuf

Changes from V1:

 * Split the patch 'Enable CONFIG_RELOCATABLE for PPC44x' to move some
   of the generic bits to a new patch.
 * Renamed RELOCATABLE_PPC32 to RELOCATABLE_PPC32_PIE and provided options to
   retained old style mapping. (Suggested by: Scott Wood)
 * Added support for avoiding the overlapping of uncompressed kernel
   with boot wrapper for PPC images.

The patches are based on -next tree for ppc.

I have tested these patches on Ebony, Sequoia and Virtex(QEMU Emulated).
I haven't tested the RELOCATABLE bits on PPC_47x yet, as I don't have access
to one. However, RELOCATABLE should work fine there as we only depend on the 
runtime address and the XLAT entry setup by the boot loader. It would be
great if somebody could test these patches on a 47x.


---

Suzuki K. Poulose (7):
  [boot] Change the load address for the wrapper to fit the kernel
  [44x] Enable CRASH_DUMP for 440x
  [44x] Enable CONFIG_RELOCATABLE for PPC44x
  [ppc] Define virtual-physical translations for RELOCATABLE
  [ppc] Process dynamic relocations for kernel
  [44x] Enable DYNAMIC_MEMSTART for 440x
  [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE


 arch/powerpc/Kconfig  |   46 +--
 arch/powerpc/Makefile |6 +
 arch/powerpc/boot/wrapper |   20 +
 arch/powerpc/configs/44x/iss476-smp_defconfig |2 
 arch/powerpc/include/asm/kdump.h  |4 -
 arch/powerpc/include/asm/page.h   |   89 -
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/crash_dump.c  |4 -
 arch/powerpc/kernel/head_44x.S|  105 +
 arch/powerpc/kernel/head_fsl_booke.S  |2 
 arch/powerpc/kernel/machine_kexec.c   |2 
 arch/powerpc/kernel/prom_init.c   |2 
 arch/powerpc/kernel/vmlinux.lds.S |8 ++
 arch/powerpc/mm/44x_mmu.c |2 
 arch/powerpc/mm/init_32.c |7 ++
 arch/powerpc/relocs_check.pl  |7 ++
 16 files changed, 282 insertions(+), 26 deletions(-)

--
Suzuki



[PATCH v4 1/7] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE

2011-12-09 Thread Suzuki K. Poulose
The current implementation of CONFIG_RELOCATABLE in BookE is based
on mapping the page aligned kernel load address to KERNELBASE. This
approach however is not enough for platforms, where the TLB page size
is large (e.g, 256M on 44x). So we are renaming the RELOCATABLE used
currently in BookE to DYNAMIC_MEMSTART to reflect the actual method.

The CONFIG_RELOCATABLE for PPC32(BookE), based on processing of the
dynamic relocations, will be introduced later in the patch series.

This change would allow the use of the old method of RELOCATABLE for
platforms which can afford to enforce the page alignment (platforms with
smaller TLB size).

Changes since v3:

* Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which is
  either RELOCATABLE or DYNAMIC_MEMSTART (Suggested by: Josh Boyer)

Suggested-by: Scott Wood scottw...@freescale.com
Tested-by: Scott Wood scottw...@freescale.com

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Scott Wood scottw...@freescale.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   60 +
 arch/powerpc/configs/44x/iss476-smp_defconfig |2 -
 arch/powerpc/include/asm/kdump.h  |4 +-
 arch/powerpc/include/asm/page.h   |4 +-
 arch/powerpc/kernel/crash_dump.c  |4 +-
 arch/powerpc/kernel/head_44x.S|4 +-
 arch/powerpc/kernel/head_fsl_booke.S  |2 -
 arch/powerpc/kernel/machine_kexec.c   |2 -
 arch/powerpc/kernel/prom_init.c   |2 -
 arch/powerpc/mm/44x_mmu.c |2 -
 10 files changed, 55 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 7c93c7e..fac92ce 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -364,7 +364,8 @@ config KEXEC
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64 || FSL_BOOKE
+   select RELOCATABLE if PPC64
+   select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.
  The same kernel binary can be used as production kernel and dump
@@ -773,6 +774,10 @@ source drivers/rapidio/Kconfig
 
 endmenu
 
+config NONSTATIC_KERNEL
+   bool
+   default n
+
 menu Advanced setup
depends on PPC32
 
@@ -822,23 +827,39 @@ config LOWMEM_CAM_NUM
	int "Number of CAMs to use to map low memory" if LOWMEM_CAM_NUM_BOOL
default 3
 
-config RELOCATABLE
-   bool "Build a relocatable kernel (EXPERIMENTAL)"
+config DYNAMIC_MEMSTART
+   bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
	depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-   help
- This builds a kernel image that is capable of running at the
- location the kernel is loaded at (some alignment restrictions may
- exist).
-
- One use is for the kexec on panic case where the recovery kernel
- must live at a different physical address than the primary
- kernel.
-
- Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
- it has been loaded at and the compile time physical addresses
- CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
- setting can still be useful to bootwrappers that need to know the
- load location of the kernel (eg. u-boot/mkimage).
+   select NONSTATIC_KERNEL
+   help
+ This option enables the kernel to be loaded at any page aligned
+ physical address. The kernel creates a mapping from KERNELBASE to 
+ the address where the kernel is loaded. The page size here implies
+	  the TLB page size of the mapping for kernel on the particular platform.
+ Please refer to the init code for finding the TLB page size.
+
+ DYNAMIC_MEMSTART is an easy way of implementing pseudo-RELOCATABLE
+ kernel image, where the only restriction is the page aligned kernel
+ load address. When this option is enabled, the compile time physical 
+ address CONFIG_PHYSICAL_START is ignored.
+
+# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
+# config RELOCATABLE
+#  bool "Build a relocatable kernel (EXPERIMENTAL)"
+#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+#  help
+#This builds a kernel image that is capable of running at the
+#location the kernel is loaded at, without any alignment restrictions.
+#
+#One use is for the kexec on panic case where the recovery kernel
+#must live at a different physical address than the primary
+#kernel.
+#
+#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from

[PATCH v4 2/7] [44x] Enable DYNAMIC_MEMSTART for 440x

2011-12-09 Thread Suzuki K. Poulose
DYNAMIC_MEMSTART(old RELOCATABLE) was restricted only to PPC_47x variants
of 44x. This patch enables DYNAMIC_MEMSTART for 440x based chipsets.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 +-
 arch/powerpc/kernel/head_44x.S |   12 
 2 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fac92ce..5eafe95 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -829,7 +829,7 @@ config LOWMEM_CAM_NUM
 
 config DYNAMIC_MEMSTART
	bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || 44x)
select NONSTATIC_KERNEL
help
  This option enables the kernel to be loaded at any page aligned
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index d5f787d..62a4cd5 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -802,12 +802,24 @@ skpinv:	addi	r4,r4,1	/* Increment */
 /*
  * Configure and load pinned entry into TLB slot 63.
  */
+#ifdef CONFIG_DYNAMIC_MEMSTART
+
+   /* Read the XLAT entry for our current mapping */
+   tlbre   r25,r23,PPC44x_TLB_XLAT
+
+   lis r3,KERNELBASE@h
+   ori r3,r3,KERNELBASE@l
+
+   /* Use our current RPN entry */
+   mr  r4,r25
+#else
 
lis r3,PAGE_OFFSET@h
ori r3,r3,PAGE_OFFSET@l
 
/* Kernel is at the base of RAM */
li r4, 0/* Load the kernel physical address */
+#endif
 
/* Load the kernel PID = 0 */
li  r0,0



[PATCH v4 3/7] [ppc] Process dynamic relocations for kernel

2011-12-09 Thread Suzuki K. Poulose
The following patch implements the dynamic relocation processing for
PPC32 kernel. relocate() accepts the target virtual address and relocates
 the kernel image to the same.

Currently the following relocation types are handled :

R_PPC_RELATIVE
R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA

The last 3 relocation types in the above list depend on the value of a symbol,
whose index is encoded in the relocation entry. Hence we need the symbol
table for processing such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra
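
For readers unfamiliar with the mechanics, here is a rough C sketch of what
such processing does. It is not the patch's reloc_32.S (which is in assembly
and, per the changelog below, also flushes the d-cache and invalidates the
i-cache afterwards), and it simplifies the symbol handling away in line with
the STB_LOCAL note above:

#include <elf.h>
#include <stdint.h>

/*
 * Hypothetical sketch of dynamic relocation processing for a PPC32
 * image.  "delta" is the difference between the runtime address and
 * the link-time address of the kernel.
 */
static void process_relocs(Elf32_Rela *rela, unsigned long count,
			   unsigned long delta)
{
	unsigned long i;

	for (i = 0; i < count; i++) {
		/* Runtime address of the field to patch */
		uint32_t *where = (uint32_t *)(rela[i].r_offset + delta);
		/* Treating STB_LOCAL symbol values as 0, value = A + delta */
		uint32_t value = rela[i].r_addend + delta;

		switch (ELF32_R_TYPE(rela[i].r_info)) {
		case R_PPC_RELATIVE:	/* patch the full word */
			*where = value;
			break;
		case R_PPC_ADDR16_LO:	/* low 16 bits */
			*(uint16_t *)where = value & 0xffff;
			break;
		case R_PPC_ADDR16_HI:	/* high 16 bits */
			*(uint16_t *)where = value >> 16;
			break;
		case R_PPC_ADDR16_HA:	/* high, adjusted for signed low part */
			*(uint16_t *)where = (value + 0x8000) >> 16;
			break;
		}
	}
}

Since some of the patched words are instructions, the modified lines must
then be flushed from the d-cache and the i-cache invalidated, which is
exactly the v2-to-v3 change noted below.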

Changes since V3:
 * Updated relocation types for ppc in arch/powerpc/relocs_check.pl

Changes since v2:
  * Flush the d-cache'd instructions and invalidate the i-cache to reflect
the processed instructions.(Reported by: Josh Poimboeuf)

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Signed-off-by: Josh Poimboeuf jpoim...@linux.vnet.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Alan Modra amo...@au1.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   42 ++---
 arch/powerpc/Makefile |6 +++--
 arch/powerpc/kernel/Makefile  |2 ++
 arch/powerpc/kernel/vmlinux.lds.S |8 ++-
 arch/powerpc/relocs_check.pl  |7 ++
 5 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5eafe95..6936cb0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -843,23 +843,31 @@ config DYNAMIC_MEMSTART
  load address. When this option is enabled, the compile time physical 
  address CONFIG_PHYSICAL_START is ignored.
 
-# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
-# config RELOCATABLE
-#  bool "Build a relocatable kernel (EXPERIMENTAL)"
-#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-#  help
-#This builds a kernel image that is capable of running at the
-#location the kernel is loaded at, without any alignment restrictions.
-#
-#One use is for the kexec on panic case where the recovery kernel
-#must live at a different physical address than the primary
-#kernel.
-#
-#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-#it has been loaded at and the compile time physical addresses
-#CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-#setting can still be useful to bootwrappers that need to know the
-#load location of the kernel (eg. u-boot/mkimage).
+ This option is overridden by RELOCATABLE.
+
+config RELOCATABLE
+   bool "Build a relocatable kernel (EXPERIMENTAL)"
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   select NONSTATIC_KERNEL
+   help
+ This builds a kernel image that is capable of running at the
+ location the kernel is loaded at, without any alignment restrictions.
+ This feature is a superset of DYNAMIC_MEMSTART, and hence overrides 
+ it.
+
+ One use is for the kexec on panic case where the recovery kernel
+ must live at a different physical address than the primary
+ kernel.
+
+ Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+ it has been loaded at and the compile time physical addresses
+ CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+ setting can still be useful to bootwrappers that need to know the
+ load address of the kernel (eg. u-boot/mkimage).
+
+config RELOCATABLE_PPC32
+   def_bool y
+   depends on PPC32 && RELOCATABLE
 
 config PAGE_OFFSET_BOOL
	bool "Set custom page offset address"
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ffe4d88..b8b105c 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,9 +63,9 @@ override CC   += -m$(CONFIG_WORD_SIZE)
 override AR:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
 endif
 
-LDFLAGS_vmlinux-yy := -Bstatic
-LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
-LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-yy)
+LDFLAGS_vmlinux-y := -Bstatic
+LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-y)
 
 CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=no -mcall-aixdesc
 CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 -mmultiple
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ce4f7f1..ee728e4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -85,6 +85,8 @@ extra-$(CONFIG_FSL_BOOKE) := head_fsl_booke.o
 extra-$(CONFIG_8xx):= head_8xx.o
 extra-y

[PATCH v4 4/7] [ppc] Define virtual-physical translations for RELOCATABLE

2011-12-09 Thread Suzuki K. Poulose
We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) + MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)

relocate() is called with the Effective Virtual Base Address (as
shown below)

| Phys. Addr| Virt. Addr |
Page||
Boundary|   ||
|   ||
|   ||
Kernel Load |___|_ __ _ _ _ _|- Effective
Addr(_stext)|   |  ^ |Virt. Base Addr
|   |  | |
|   |  | |
|   |reloc_offset|
|   |  | |
|   |  | |
|   |__v_|-(KERNELBASE)%TLB_SIZE
|   ||
|   ||
|   ||
Page|---||
Boundary|   ||


On BookE, we need __va() and __pa() early in the boot process to access
the device tree.

Currently this has been defined as :

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + KERNELBASE))
where:
 PHYSICAL_START is kernstart_addr - a variable updated at runtime.
 KERNELBASE is the compile time Virtual base address of kernel.

This won't work for us, as kernstart_addr is dynamic and will yield different
results for __va()/__pa() for same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
= 0xbc100000, which is wrong.

It should be: 0xc0000000 + 0x100000 = 0xc0100000

On platforms which support AMP, like PPC_47x (based on 44x), the kernel
could be loaded at highmem. Hence we cannot always depend on the compile
time constants for mapping.

Here are the possible solutions:

1) Update kernstart_addr(PHYSICAL_START) to match the Physical address of
compile time KERNELBASE value, instead of the actual Physical_Address(_stext).

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va() and __pa() with relocation offset


#ifdef  CONFIG_RELOCATABLE_PPC32
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

  a) A variable, say relocation_offset (like kernstart_addr), updated
 at boot time. This impacts performance, as we have to load an additional
 variable from memory.

OR

  b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
                           (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

   This introduces more calculations for doing the translation.

3) Redefine __va() and __pa() with a new variable

i.e,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_RELOCATABLE_PPC32
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* CONFIG_RELOCATABLE_PPC32 */

where virt_phys_offset is updated at runtime to:

Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
 = 0xc0400000 - 0x400000
 = 0xc0000000
and

__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
 which is what we want.

I have implemented (3) in the following patch, which has the same cost of
operation as the existing one.
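
A condensed userspace C sketch of option (3), plugging in the example
numbers above (illustrative only; the real definitions live in asm/page.h):

#include <stdio.h>

static long long virt_phys_offset;	/* updated at boot in the real kernel */

#define __va(x)	((void *)(unsigned long)((x) + virt_phys_offset))
#define __pa(x)	((unsigned long)((x) - virt_phys_offset))

int main(void)
{
	unsigned long effective_base = 0xc0400000UL;	/* effective kernel virt base */
	unsigned long kernstart_addr = 0x400000UL;	/* kernel loaded at 4MB */

	virt_phys_offset = effective_base - kernstart_addr;	/* 0xc0000000 */

	printf("__va(0x100000)   = %p\n", __va(0x100000UL));	  /* 0xc0100000 */
	printf("__pa(0xc0100000) = 0x%lx\n", __pa(0xc0100000UL)); /* 0x100000 */
	return 0;
}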

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/include/asm/page.h |   85 ++-
 arch/powerpc/mm/init_32.c   |7 +++
 2 files changed, 89 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f149967..f072e97 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -97,12 +97,26 @@ extern unsigned int HPAGE_SHIFT;
 
 extern phys_addr_t memstart_addr;
 extern phys_addr_t kernstart_addr;
+
+#ifdef CONFIG_RELOCATABLE_PPC32
+extern long long virt_phys_offset;
 #endif
+
+#endif

[PATCH v4 5/7] [44x] Enable CONFIG_RELOCATABLE for PPC44x

2011-12-09 Thread Suzuki K. Poulose
The following patch adds relocatable kernel support - based on processing
of dynamic relocations - for PPC44x kernel.

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,256M) + MODULO(_stext.run,256M)

relocate() is called with the Effective Virtual Base Address (as
shown below)

| Phys. Addr| Virt. Addr |
Page (256M) ||
Boundary|   ||
|   ||
|   ||
Kernel Load |___|_ __ _ _ _ _|- Effective
Addr(_stext)|   |  ^ |Virt. Base Addr
|   |  | |
|   |  | |
|   |reloc_offset|
|   |  | |
|   |  | |
|   |__v_|-(KERNELBASE)%256M
|   ||
|   ||
|   ||
Page(256M)  |---||
Boundary|   ||

The virt_phys_offset is updated accordingly, i.e.,

virt_phys_offset = effective kernel virtual base - kernstart_addr

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Tony Breeds t...@bakeyournoodle.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 -
 arch/powerpc/kernel/head_44x.S |   95 +++-
 2 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 6936cb0..90cd8d3 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -847,7 +847,7 @@ config DYNAMIC_MEMSTART
 
 config RELOCATABLE
	bool "Build a relocatable kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && 44x
select NONSTATIC_KERNEL
help
  This builds a kernel image that is capable of running at the
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index 62a4cd5..213ed31 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -64,6 +64,35 @@ _ENTRY(_start);
mr  r31,r3  /* save device tree ptr */
li  r24,0   /* CPU number */
 
+#ifdef CONFIG_RELOCATABLE
+/*
+ * Relocate ourselves to the current runtime address.
+ * This is called only by the Boot CPU.
+ * relocate is called with our current runtime virtual
+ * address.
+ * r21 will be loaded with the physical runtime address of _stext
+ */
+   bl  0f  /* Get our runtime address */
+0:	mflr	r21	/* Make it accessible */
+	addis	r21,r21,(_stext - 0b)@ha
+	addi	r21,r21,(_stext - 0b)@l	/* Get our current runtime base */
+
+   /*
+* We have the runtime (virtual) address of our base.
+* We calculate our shift of offset from a 256M page.
+* We could map the 256M page we belong to at PAGE_OFFSET and
+* get going from there.
+*/
+   lis r4,KERNELBASE@h
+   ori r4,r4,KERNELBASE@l
+   rlwinm  r6,r21,0,4,31   /* r6 = PHYS_START % 256M */
+   rlwinm  r5,r4,0,4,31/* r5 = KERNELBASE % 256M */
+   subfr3,r5,r6/* r3 = r6 - r5 */
+	add	r3,r4,r3	/* Required Virtual Address */
+
+   bl  relocate
+#endif
+
bl  init_cpu_state
 
/*
@@ -86,7 +115,64 @@ _ENTRY(_start);
 
bl  early_init
 
-#ifdef CONFIG_DYNAMIC_MEMSTART
+#ifdef CONFIG_RELOCATABLE
+   /*
+* Relocatable kernel support based on processing of dynamic
+* relocation entries.
+*
+* r25 will contain RPN/ERPN for the start address of memory
+* r21 will contain the current offset of _stext
+*/
+   lis r3,kernstart_addr@ha
+   la  r3,kernstart_addr@l(r3)
+
+   /*
+* Compute the kernstart_addr.
+* kernstart_addr = (r6,r8)
+* kernstart_addr & ~0x0fffffff = (r6,r7)
+*/
+   rlwinm  r6,r25,0,28,31  /* ERPN. Bits 32-35 of Address */
+   rlwinm  r7,r25,0,0,3/* RPN - assuming 256 MB page size */
+   rlwinm  r8,r21,0,4,31   /* r8 = (_stext & 0x0fffffff) */
+   or  r8,r7,r8/* Compute the lower 32bit of kernstart_addr

[PATCH v4 6/7] [44x] Enable CRASH_DUMP for 440x

2011-12-09 Thread Suzuki K. Poulose
Now that we have a relocatable kernel, supporting CRASH_DUMP only requires
turning the switches on for UP machines.

We don't have kexec support on 47x yet. Enabling SMP support would be done
as part of enabling the PPC_47x support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 90cd8d3..d612943 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -363,8 +363,8 @@ config KEXEC
 
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
-   depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP && !PPC_47x)
+   select RELOCATABLE if PPC64 || 44x
select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.



[PATCH v4 7/7] [boot] Change the load address for the wrapper to fit the kernel

2011-12-09 Thread Suzuki K. Poulose
The wrapper code which uncompresses the kernel in case of a 'ppc' boot
is by default loaded at 0x00400000 and the kernel will be uncompressed
to fit the location 0-0x00400000. But with dynamic relocations, the size
of the kernel may exceed 0x00400000 (4M). This would cause an overlap
of the uncompressed kernel and the boot wrapper, causing a failure in
boot.

The message looks like :


   zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
   Allocating 0x5ce650 bytes for kernel ...
   Insufficient memory for kernel at address 0! (_start=00400000, uncompressed size=00591a20)

This patch shifts the load address of the boot wrapper code to the next
higher MB, according to the size of the uncompressed vmlinux.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/boot/wrapper |   20 
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index 14cd4bc..4d625cd 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -257,6 +257,8 @@ vmz="$tmpdir/`basename \"$kernel\"`.$ext"
 if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 ${CROSS}objcopy $objflags $kernel $vmz.$$
 
+strip_size=$(stat -c %s $vmz.$$)
+
 if [ -n "$gzip" ]; then
 gzip -n -f -9 $vmz.$$
 fi
@@ -266,6 +268,24 @@ if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 else
vmz=$vmz.$$
 fi
+else
+# Calculate the vmlinux.strip size
+${CROSS}objcopy $objflags $kernel $vmz.$$
+strip_size=$(stat -c %s $vmz.$$)
+rm -f $vmz.$$
+fi
+
+# Round the size to next higher MB limit
+round_size=$(((strip_size + 0xfffff) & 0xfff00000))
+
+round_size=0x$(printf "%x" $round_size)
+link_addr=$(printf "%d" $link_address)
+
+if [ $link_addr -lt $strip_size ]; then
+echo "WARN: Uncompressed kernel size(0x$(printf "%x\n" $strip_size))" \
+   "exceeds the address of the wrapper($link_address)"
+echo "WARN: Fixing the link_address to ($round_size)"
+link_address=$round_size
 fi
 
 vmz=$vmz$gzip



[UPDATED] [PATCH v4 3/7] [ppc] Process dynamic relocations for kernel

2011-12-09 Thread Suzuki K. Poulose
The following patch implements the dynamic relocation processing for
PPC32 kernel. relocate() accepts the target virtual address and relocates
 the kernel image to the same.

Currently the following relocation types are handled :

R_PPC_RELATIVE
R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA

The last 3 relocation types in the above list depend on the value of a symbol,
whose index is encoded in the relocation entry. Hence we need the symbol
table for processing such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra

Changes since V3:
 * Updated relocation types for ppc in arch/powerpc/relocs_check.pl

Changes since v2:
  * Flush the d-cache'd instructions and invalidate the i-cache to reflect
the processed instructions.(Reported by: Josh Poimboeuf)

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Signed-off-by: Josh Poimboeuf jpoim...@linux.vnet.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Alan Modra amo...@au1.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   42 
 arch/powerpc/Makefile |6 +
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/reloc_32.S|  207 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +
 arch/powerpc/relocs_check.pl  |7 +
 6 files changed, 251 insertions(+), 21 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5eafe95..6936cb0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -843,23 +843,31 @@ config DYNAMIC_MEMSTART
  load address. When this option is enabled, the compile time physical 
  address CONFIG_PHYSICAL_START is ignored.
 
-# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
-# config RELOCATABLE
-#  bool "Build a relocatable kernel (EXPERIMENTAL)"
-#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-#  help
-#This builds a kernel image that is capable of running at the
-#location the kernel is loaded at, without any alignment restrictions.
-#
-#One use is for the kexec on panic case where the recovery kernel
-#must live at a different physical address than the primary
-#kernel.
-#
-#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-#it has been loaded at and the compile time physical addresses
-#CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-#setting can still be useful to bootwrappers that need to know the
-#load location of the kernel (eg. u-boot/mkimage).
+ This option is overridden by RELOCATABLE.
+
+config RELOCATABLE
+   bool "Build a relocatable kernel (EXPERIMENTAL)"
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   select NONSTATIC_KERNEL
+   help
+ This builds a kernel image that is capable of running at the
+ location the kernel is loaded at, without any alignment restrictions.
+ This feature is a superset of DYNAMIC_MEMSTART, and hence overrides 
+ it.
+
+ One use is for the kexec on panic case where the recovery kernel
+ must live at a different physical address than the primary
+ kernel.
+
+ Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+ it has been loaded at and the compile time physical addresses
+ CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+ setting can still be useful to bootwrappers that need to know the
+ load address of the kernel (eg. u-boot/mkimage).
+
+config RELOCATABLE_PPC32
+   def_bool y
+   depends on PPC32 && RELOCATABLE
 
 config PAGE_OFFSET_BOOL
	bool "Set custom page offset address"
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ffe4d88..b8b105c 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,9 +63,9 @@ override CC   += -m$(CONFIG_WORD_SIZE)
 override AR:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
 endif
 
-LDFLAGS_vmlinux-yy := -Bstatic
-LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
-LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-yy)
+LDFLAGS_vmlinux-y := -Bstatic
+LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-y)
 
 CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=no -mcall-aixdesc
 CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 -mmultiple
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ce4f7f1..ee728e4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -85,6 +85,8 @@ extra-$(CONFIG_FSL_BOOKE) := head_fsl_booke.o
 extra

[UPDATED] [PATCH v3 1/8] [44x] Fix typo in KEXEC CONFIG dependency

2011-11-14 Thread Suzuki K. Poulose
Kexec is not supported on 47x. 47x is a variant of 44x with slightly
different MMU and SMP support. There was a typo in the config
dependency for KEXEC. This patch fixes the same.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Signed-off-by: Paul Bolle pebo...@tiscali.nl
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Josh Boyer jwbo...@gmail.com
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |2 +-
 arch/powerpc/kernel/misc_32.S |2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8523bd1..d7c2d1a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -345,7 +345,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
	bool "kexec system call (EXPERIMENTAL)"
-   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !47x)) && EXPERIMENTAL
+   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !PPC_47x)) && EXPERIMENTAL
help
  kexec is a system call that implements the ability to shutdown your
  current kernel, and to start another kernel.  It is like a reboot
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index f7d760a..7cd07b4 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -738,7 +738,7 @@ relocate_new_kernel:
mr  r5, r31
 
li  r0, 0
-#elif defined(CONFIG_44x) && !defined(CONFIG_47x)
+#elif defined(CONFIG_44x) && !defined(CONFIG_PPC_47x)
 
 /*
  * Code for setting up 1:1 mapping for PPC440x for KEXEC



[PATCH v3 2/8] [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE

2011-11-13 Thread Suzuki K. Poulose
The current implementation of CONFIG_RELOCATABLE in BookE is based
on mapping the page aligned kernel load address to KERNELBASE. This
approach however is not enough for platforms, where the TLB page size
is large (e.g, 256M on 44x). So we are renaming the RELOCATABLE used
currently in BookE to DYNAMIC_MEMSTART to reflect the actual method.

The CONFIG_RELOCATABLE for PPC32(BookE), based on processing of the
dynamic relocations, will be introduced later in the patch series.

This change would allow the use of the old method of RELOCATABLE for
platforms which can afford to enforce the page alignment (platforms with
smaller TLB size).

I have tested this change only on 440x. I don't have an FSL BookE to verify
the changes there.

Scott,
Could you please test this patch on FSL and let me know the results ?

Suggested-by: Scott Wood scottw...@freescale.com

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Scott Wood scottw...@freescale.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   50 -
 arch/powerpc/configs/44x/iss476-smp_defconfig |2 +
 arch/powerpc/include/asm/kdump.h  |5 ++-
 arch/powerpc/include/asm/page.h   |4 +-
 arch/powerpc/kernel/crash_dump.c  |4 +-
 arch/powerpc/kernel/head_44x.S|4 ++
 arch/powerpc/kernel/head_fsl_booke.S  |2 +
 arch/powerpc/kernel/machine_kexec.c   |2 +
 arch/powerpc/kernel/prom_init.c   |2 +
 arch/powerpc/mm/44x_mmu.c |2 +
 10 files changed, 47 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d7c2d1a..8d4f789 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -363,7 +363,8 @@ config KEXEC
 config CRASH_DUMP
	bool "Build a kdump crash kernel"
depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64 || FSL_BOOKE
+   select RELOCATABLE if PPC64
+   select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.
  The same kernel binary can be used as production kernel and dump
@@ -841,23 +842,36 @@ config LOWMEM_CAM_NUM
	int "Number of CAMs to use to map low memory" if LOWMEM_CAM_NUM_BOOL
default 3
 
-config RELOCATABLE
-   bool Build a relocatable kernel (EXPERIMENTAL)
+config DYNAMIC_MEMSTART
+   bool Enable page aligned dynamic load address for kernel 
(EXPERIMENTAL)
depends on EXPERIMENTAL  ADVANCED_OPTIONS  FLATMEM  (FSL_BOOKE || 
PPC_47x)
help
- This builds a kernel image that is capable of running at the
- location the kernel is loaded at (some alignment restrictions may
- exist).
-
- One use is for the kexec on panic case where the recovery kernel
- must live at a different physical address than the primary
- kernel.
-
- Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
- it has been loaded at and the compile time physical addresses
- CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
- setting can still be useful to bootwrappers that need to know the
- load location of the kernel (eg. u-boot/mkimage).
+ This option enables the kernel to be loaded at any page aligned
+ physical address. The kernel creates a mapping from KERNELBASE to 
+ the address where the kernel is loaded.
+
+ DYNAMIC_MEMSTART is an easy way of implementing pseudo-RELOCATABLE
+ kernel image, where the only restriction is the page aligned kernel
+ load address. When this option is enabled, the compile time physical 
+ address CONFIG_PHYSICAL_START is ignored.
+
+# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
+# config RELOCATABLE
+#  bool "Build a relocatable kernel (EXPERIMENTAL)"
+#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+#  help
+#This builds a kernel image that is capable of running at the
+#location the kernel is loaded at, without any alignment restrictions.
+#
+#One use is for the kexec on panic case where the recovery kernel
+#must live at a different physical address than the primary
+#kernel.
+#
+#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+#it has been loaded at and the compile time physical addresses
+#CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+#setting can still be useful to bootwrappers that need to know the
+#load location of the kernel (eg. u-boot/mkimage).
 
 config PAGE_OFFSET_BOOL
	bool "Set custom page offset address"
@@ -887,7 +901,7 @@ config KERNEL_START_BOOL
 config KERNEL_START
hex

[PATCH v3 3/8] [44x] Enable DYNAMIC_MEMSTART for 440x

2011-11-13 Thread Suzuki K. Poulose
DYNAMIC_MEMSTART(old RELOCATABLE) was restricted only to PPC_47x variants
of 44x. This patch enables DYNAMIC_MEMSTART for 440x based chipsets.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 +-
 arch/powerpc/kernel/head_44x.S |   12 
 2 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8d4f789..076782d 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -844,7 +844,7 @@ config LOWMEM_CAM_NUM
 
 config DYNAMIC_MEMSTART
	bool "Enable page aligned dynamic load address for kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || 44x)
help
  This option enables the kernel to be loaded at any page aligned
  physical address. The kernel creates a mapping from KERNELBASE to 
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index d5f787d..62a4cd5 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -802,12 +802,24 @@ skpinv:	addi	r4,r4,1	/* Increment */
 /*
  * Configure and load pinned entry into TLB slot 63.
  */
+#ifdef CONFIG_DYNAMIC_MEMSTART
+
+   /* Read the XLAT entry for our current mapping */
+   tlbre   r25,r23,PPC44x_TLB_XLAT
+
+   lis r3,KERNELBASE@h
+   ori r3,r3,KERNELBASE@l
+
+   /* Use our current RPN entry */
+   mr  r4,r25
+#else
 
lis r3,PAGE_OFFSET@h
ori r3,r3,PAGE_OFFSET@l
 
/* Kernel is at the base of RAM */
li r4, 0/* Load the kernel physical address */
+#endif
 
/* Load the kernel PID = 0 */
li  r0,0



[PATCH v3 4/8] [ppc] Process dynamic relocations for kernel

2011-11-13 Thread Suzuki K. Poulose
The following patch implements the dynamic relocation processing for
PPC32 kernel. relocate() accepts the target virtual address and relocates
 the kernel image to the same.

Currently the following relocation types are handled :

R_PPC_RELATIVE
R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA

The last 3 relocation types in the above list depend on the value of a symbol,
whose index is encoded in the relocation entry. Hence we need the symbol
table for processing such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra

Changes since v2:
  * Flush the d-cache'd instructions and invalidate the i-cache to reflect
the processed instructions.(Reported by: Josh Poimboeuf)

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Signed-off-by: Josh Poimboeuf jpoim...@linux.vnet.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Alan Modra amo...@au1.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   41 ---
 arch/powerpc/Makefile |6 +
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/reloc_32.S|  207 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +
 5 files changed, 243 insertions(+), 21 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 076782d..a976f75 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -855,23 +855,30 @@ config DYNAMIC_MEMSTART
  load address. When this option is enabled, the compile time physical 
  address CONFIG_PHYSICAL_START is ignored.
 
-# Mapping based RELOCATABLE is moved to DYNAMIC_MEMSTART
-# config RELOCATABLE
-#  bool "Build a relocatable kernel (EXPERIMENTAL)"
-#  depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
-#  help
-#This builds a kernel image that is capable of running at the
-#location the kernel is loaded at, without any alignment restrictions.
-#
-#One use is for the kexec on panic case where the recovery kernel
-#must live at a different physical address than the primary
-#kernel.
-#
-#Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
-#it has been loaded at and the compile time physical addresses
-#CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
-#setting can still be useful to bootwrappers that need to know the
-#load location of the kernel (eg. u-boot/mkimage).
+ This option is overridden by RELOCATABLE.
+
+config RELOCATABLE
+   bool "Build a relocatable kernel (EXPERIMENTAL)"
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   help
+ This builds a kernel image that is capable of running at the
+ location the kernel is loaded at, without any alignment restrictions.
+ This feature is a superset of DYNAMIC_MEMSTART, and hence overrides 
+ it.
+
+ One use is for the kexec on panic case where the recovery kernel
+ must live at a different physical address than the primary
+ kernel.
+
+ Note: If CONFIG_RELOCATABLE=y, then the kernel runs from the address
+ it has been loaded at and the compile time physical addresses
+ CONFIG_PHYSICAL_START is ignored.  However CONFIG_PHYSICAL_START
+ setting can still be useful to bootwrappers that need to know the
+ load address of the kernel (eg. u-boot/mkimage).
+
+config RELOCATABLE_PPC32
+   def_bool y
+   depends on PPC32 && RELOCATABLE
 
 config PAGE_OFFSET_BOOL
	bool "Set custom page offset address"
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 57af16e..435ecb8 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -63,9 +63,9 @@ override CC   += -m$(CONFIG_WORD_SIZE)
 override AR:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
 endif
 
-LDFLAGS_vmlinux-yy := -Bstatic
-LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
-LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-yy)
+LDFLAGS_vmlinux-y := -Bstatic
+LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux:= $(LDFLAGS_vmlinux-y)
 
 CFLAGS-$(CONFIG_PPC64) := -mminimal-toc -mtraceback=no -mcall-aixdesc
 CFLAGS-$(CONFIG_PPC32) := -ffixed-r2 -mmultiple
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ce4f7f1..ee728e4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -85,6 +85,8 @@ extra-$(CONFIG_FSL_BOOKE) := head_fsl_booke.o
 extra-$(CONFIG_8xx):= head_8xx.o
 extra-y	+= vmlinux.lds
 
+obj-$(CONFIG_RELOCATABLE_PPC32)	+= reloc_32.o

[PATCH v3 5/8] [ppc] Define virtual-physical translations for RELOCATABLE

2011-11-13 Thread Suzuki K. Poulose
We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) + MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)

relocate() is called with the Effective Virtual Base Address (as
shown below)

| Phys. Addr| Virt. Addr |
Page||
Boundary|   ||
|   ||
|   ||
Kernel Load |___|_ __ _ _ _ _|- Effective
Addr(_stext)|   |  ^ |Virt. Base Addr
|   |  | |
|   |  | |
|   |reloc_offset|
|   |  | |
|   |  | |
|   |__v_|-(KERNELBASE)%TLB_SIZE
|   ||
|   ||
|   ||
Page|---||
Boundary|   ||


On BookE, we need __va() and __pa() early in the boot process to access
the device tree.

Currently this has been defined as :

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + KERNELBASE))
where:
 PHYSICAL_START is kernstart_addr - a variable updated at runtime.
 KERNELBASE is the compile time Virtual base address of kernel.

This won't work for us, as kernstart_addr is dynamic and will yield different
results for __va()/__pa() for same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
= 0xbc100000, which is wrong.

It should be: 0xc0000000 + 0x100000 = 0xc0100000

On platforms which support AMP, like PPC_47x (based on 44x), the kernel
could be loaded at highmem. Hence we cannot always depend on the compile
time constants for mapping.

Here are the possible solutions:

1) Update kernstart_addr(PHYSICAL_START) to match the Physical address of
compile time KERNELBASE value, instead of the actual Physical_Address(_stext).

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va() and __pa() with relocation offset


#ifdef  CONFIG_RELOCATABLE_PPC32
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

  a) A variable, say relocation_offset (like kernstart_addr), updated
 at boot time. This impacts performance, as we have to load an additional
 variable from memory.

OR

  b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
                           (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

   This introduces more calculations for doing the translation.

3) Redefine __va() and __pa() with a new variable

i.e,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_RELOCATABLE_PPC32
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* CONFIG_RELOCATABLE_PPC32 */

where virt_phys_offset is updated at runtime to:

Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
 = 0xc0400000 - 0x400000
 = 0xc0000000
and

__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
 which is what we want.

I have implemented (3) in the following patch, which has the same cost of
operation as the existing one.

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/include/asm/page.h |   85 ++-
 arch/powerpc/mm/init_32.c   |7 +++
 2 files changed, 89 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 97cfe86..a8d0888 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -97,12 +97,26 @@ extern unsigned int HPAGE_SHIFT;
 
 extern phys_addr_t memstart_addr;
 extern phys_addr_t kernstart_addr;
+
+#ifdef CONFIG_RELOCATABLE_PPC32
+extern long long virt_phys_offset;
 #endif
+
+#endif

[PATCH v3 1/8] [44x] Fix typo in KEXEC Kconfig dependency

2011-11-13 Thread Suzuki K. Poulose
Kexec is not supported on 47x. 47x is a variant of 44x with slightly
different MMU and SMP support. There was a typo in the Kconfig
dependency for KEXEC. This patch fixes the same.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Josh Boyer jwbo...@gmail.com
Cc: linux ppc dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8523bd1..d7c2d1a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -345,7 +345,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
	bool "kexec system call (EXPERIMENTAL)"
-   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !47x)) && EXPERIMENTAL
+   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !PPC_47x)) && EXPERIMENTAL
help
  kexec is a system call that implements the ability to shutdown your
  current kernel, and to start another kernel.  It is like a reboot



[PATCH v3 6/8] [44x] Enable CONFIG_RELOCATABLE for PPC44x

2011-11-13 Thread Suzuki K. Poulose
The following patch adds relocatable support for PPC44x kernel.

This enables two types of relocatable kernel support for PPC44x.

1) The old style, mapping based- which restricts the load address to 256M
   aligned.

2) The new approach based on processing dynamic relocation entries -
   CONFIG_RELOCATABLE_PPC32_PIE


In case of CONFIG_RELOCATABLE_PPC32_PIE :

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,256M) + MODULO(_stext.run,256M)

relocate() is called with the Effective Virtual Base Address (as
shown below)

| Phys. Addr| Virt. Addr |
Page (256M) ||
Boundary|   ||
|   ||
|   ||
Kernel Load |___|_ __ _ _ _ _|- Effective
Addr(_stext)|   |  ^ |Virt. Base Addr
|   |  | |
|   |  | |
|   |reloc_offset|
|   |  | |
|   |  | |
|   |__v_|-(KERNELBASE)%256M
|   ||
|   ||
|   ||
Page(256M)  |---||
Boundary|   ||

The virt_phys_offset is updated accordingly, i.e.,

virt_phys_offset = effective kernel virtual base - kernstart_addr

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Tony Breeds t...@bakeyournoodle.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |2 -
 arch/powerpc/kernel/head_44x.S |   90 +++-
 2 files changed, 89 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a976f75..7923520 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -859,7 +859,7 @@ config DYNAMIC_MEMSTART
 
 config RELOCATABLE
	bool "Build a relocatable kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && 44x
help
  This builds a kernel image that is capable of running at the
  location the kernel is loaded at, without any alignment restrictions.
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index 62a4cd5..7672f2c 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -64,6 +64,35 @@ _ENTRY(_start);
mr  r31,r3  /* save device tree ptr */
li  r24,0   /* CPU number */
 
+#ifdef CONFIG_RELOCATABLE
+/*
+ * Relocate ourselves to the current runtime address.
+ * This is called only by the Boot CPU.
+ * relocate is called with our current runtime virtual
+ * address.
+ * r21 will be loaded with the physical runtime address of _stext
+ */
+   bl  0f  /* Get our runtime address */
+0:	mflr	r21	/* Make it accessible */
+	addis	r21,r21,(_stext - 0b)@ha
+	addi	r21,r21,(_stext - 0b)@l	/* Get our current runtime base */
+
+   /*
+* We have the runtime (virtual) address of our base.
+* We calculate our shift of offset from a 256M page.
+* We could map the 256M page we belong to at PAGE_OFFSET and
+* get going from there.
+*/
+   lis r4,KERNELBASE@h
+   ori r4,r4,KERNELBASE@l
+   rlwinm  r6,r21,0,4,31   /* r6 = PHYS_START % 256M */
+   rlwinm  r5,r4,0,4,31/* r5 = KERNELBASE % 256M */
+   subfr3,r5,r6/* r3 = r6 - r5 */
+	add	r3,r4,r3	/* Required Virtual Address */
+
+   bl  relocate
+#endif
+
bl  init_cpu_state
 
/*
@@ -86,7 +115,64 @@ _ENTRY(_start);
 
bl  early_init
 
-#ifdef CONFIG_DYNAMIC_MEMSTART
+#ifdef CONFIG_RELOCATABLE
+   /*
+* Relocatable kernel support based on processing of dynamic
+* relocation entries.
+*
+* r25 will contain RPN/ERPN for the start address of memory
+* r21 will contain the current offset of _stext
+*/
+   lis r3,kernstart_addr@ha
+   la  r3,kernstart_addr@l(r3)
+
+   /*
+* Compute the kernstart_addr.
+* kernstart_addr = (r6,r8)
+* kernstart_addr & ~0x0fffffff

[PATCH v3 8/8] [boot] Change the load address for the wrapper to fit the kernel

2011-11-13 Thread Suzuki K. Poulose
The wrapper code which uncompresses the kernel in case of a 'ppc' boot
is by default loaded at 0x00400000 and the kernel will be uncompressed
to fit the location 0-0x00400000. But with dynamic relocations, the size
of the kernel may exceed 0x00400000 (4M). This would cause an overlap
of the uncompressed kernel and the boot wrapper, causing a failure in
boot.

The message looks like :


   zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
   Allocating 0x5ce650 bytes for kernel ...
   Insufficient memory for kernel at address 0! (_start=00400000, uncompressed size=00591a20)

This patch shifts the load address of the boot wrapper code to the next
higher MB, according to the size of the uncompressed vmlinux.
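
As a quick check of the rounding arithmetic the patch adds (a standalone
C sketch, assuming the masked constants are the usual next-MB rounding
pair 0xfffff / 0xfff00000):

#include <stdio.h>

/* Round a size up to the next 1MB boundary, mirroring the wrapper's
 * $(((strip_size + 0xfffff) & 0xfff00000)) shell arithmetic. */
static unsigned long round_up_mb(unsigned long size)
{
	return (size + 0xfffffUL) & 0xfff00000UL;
}

int main(void)
{
	unsigned long strip_size = 0x591a20UL;	/* size from the log above */

	printf("0x%lx -> 0x%lx\n", strip_size, round_up_mb(strip_size));
	/* prints: 0x591a20 -> 0x600000 */
	return 0;
}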

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/boot/wrapper |   20 
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index c74531a..213a9fd 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -257,6 +257,8 @@ vmz="$tmpdir/`basename \"$kernel\"`.$ext"
 if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
     ${CROSS}objcopy $objflags "$kernel" "$vmz.$$"
 
+strip_size=$(stat -c %s $vmz.$$)
+
 if [ -n "$gzip" ]; then
     gzip -n -f -9 "$vmz.$$"
 fi
@@ -266,6 +268,24 @@ if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 else
vmz="$vmz.$$"
 fi
+else
+# Calculate the vmlinux.strip size
+${CROSS}objcopy $objflags "$kernel" "$vmz.$$"
+strip_size=$(stat -c %s $vmz.$$)
+rm -f $vmz.$$
+fi
+
+# Round the size to next higher MB limit
+round_size=$(((strip_size + 0xfffff) & 0xfff00000))
+
+round_size=0x$(printf "%x\n" $round_size)
+link_addr=$(printf "%d\n" $link_address)
+
+if [ $link_addr -lt $strip_size ]; then
+echo "WARN: Uncompressed kernel size(0x$(printf "%x\n" $strip_size))" \
+"exceeds the address of the wrapper($link_address)"
+echo "WARN: Fixing the link_address to ($round_size)"
+link_address=$round_size
 fi
 
vmz="$vmz$gzip"



[PATCH v3 7/8] [44x] Enable CRASH_DUMP for 440x

2011-11-13 Thread Suzuki K. Poulose
Now that we have a relocatable kernel, supporting CRASH_DUMP only requires
turning the switches on for UP machines.

We don't have kexec support on 47x yet. Enabling SMP support would be done
as part of enabling the PPC_47x support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 7923520..d3fe852 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -362,8 +362,8 @@ config KEXEC
 
 config CRASH_DUMP
bool "Build a kdump crash kernel"
-   depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP && !PPC_47x)
+   select RELOCATABLE if PPC64 || 44x
select DYNAMIC_MEMSTART if FSL_BOOKE
help
  Build a kernel suitable for use as a kdump capture kernel.



[PATCH v3 0/8] Kudmp support for PPC440x

2011-11-13 Thread Suzuki K. Poulose
The following series implements:

 * Generic framework for relocatable kernel on PPC32, based on processing 
   the dynamic relocation entries.
 * Relocatable kernel support for 44x
 * Kdump support for 44x. Doesn't support 47x yet, as the kexec 
   support is missing.

Changes from V2:

 * Renamed old style mapping based RELOCATABLE on BookE to DYNAMIC_MEMSTART.
   Suggested by: Scott Wood
 * Added support for DYNAMIC_MEMSTART on PPC440x
 * Reverted back to RELOCATABLE and RELOCATABLE_PPC32 from RELOCATABLE_PPC32_PIE
   for relocation based on processing dynamic reloc entries for PPC32.
 * Ensure the modified instructions are flushed and the i-cache invalidated at
   the end of relocate(). - Reported by : Josh Poimboeuf

Changes from V1:

 * Split the patch 'Enable CONFIG_RELOCATABLE for PPC44x' to move some
   of the generic bits to a new patch.
 * Renamed RELOCATABLE_PPC32 to RELOCATABLE_PPC32_PIE and provided options to
   retain the old style mapping. (Suggested by: Scott Wood)
 * Added support for avoiding the overlapping of uncompressed kernel
   with boot wrapper for PPC images.

The patches are based on -next tree for ppc.

I have tested these patches on Ebony, Sequoia and Virtex (QEMU emulated).
I haven't tested the RELOCATABLE bits on PPC_47x yet, as I don't have access
to one. However, it should work fine there as we only depend on the runtime
address and the XLAT entry setup by the boot loader. It would be great if
somebody could test these patches on a 47x.

---

Suzuki K. Poulose (8):
  [boot] Change the load address for the wrapper to fit the kernel
  [44x] Enable CRASH_DUMP for 440x
  [44x] Enable CONFIG_RELOCATABLE for PPC44x
  [ppc] Define virtual-physical translations for RELOCATABLE
  [ppc] Process dynamic relocations for kernel
  [44x] Enable DYNAMIC_MEMSTART for 440x
  [booke] Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE
  [44x] Fix typo in KEXEC Kconfig dependency


 arch/powerpc/Kconfig  |   39 -
 arch/powerpc/Makefile |6 -
 arch/powerpc/boot/wrapper |   20 ++
 arch/powerpc/configs/44x/iss476-smp_defconfig |2 
 arch/powerpc/include/asm/kdump.h  |5 -
 arch/powerpc/include/asm/page.h   |   89 ++-
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/crash_dump.c  |4 
 arch/powerpc/kernel/head_44x.S|  100 
 arch/powerpc/kernel/head_fsl_booke.S  |2 
 arch/powerpc/kernel/machine_kexec.c   |2 
 arch/powerpc/kernel/prom_init.c   |2 
 arch/powerpc/kernel/reloc_32.S|  207 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +
 arch/powerpc/mm/44x_mmu.c |2 
 arch/powerpc/mm/init_32.c |7 +
 16 files changed, 470 insertions(+), 27 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

--
Suzuki


[PATCH v2 0/5] Kdump support for PPC440x

2011-10-25 Thread Suzuki K. Poulose
The following series implements:

 * Generic framework for relocatable kernel on PPC32, based on processing 
   the dynamic relocation entries.
 * Relocatable kernel support for 44x
 * Kdump support for 44x. Doesn't support 47x yet, as the kexec 
   support is missing.

Changes from V1:

  * Split the patch 'Enable CONFIG_RELOCATABLE for PPC44x' to move some
of the generic bits to a new patch.
  * Renamed RELOCATABLE_PPC32 to RELOCATABLE_PPC32_PIE and provided options to
    retain the old style mapping. (Suggested by: Scott Wood)
  * Added support for avoiding the overlapping of uncompressed kernel
with boot wrapper for PPC images.

The patches are based on -next tree for ppc.

I have tested these patches on Ebony, Sequoia and Virtex (QEMU emulated).
I haven't tested the RELOCATABLE bits on PPC_47x yet, as I don't have access
to one. However, it should work fine there as we only depend on the runtime
address and the XLAT entry setup by the boot loader. It would be great if
somebody could test these patches on a 47x.


---

Suzuki K. Poulose (5):
  [boot] Change the load address for the wrapper to fit the kernel
  [44x] Enable CRASH_DUMP for 440x
  [44x] Enable CONFIG_RELOCATABLE for PPC44x
  [ppc] Define virtual-physical translations for PIE relocations
  [ppc] Process dynamic relocations for kernel


 arch/powerpc/Kconfig  |   18 +++
 arch/powerpc/Makefile |1 
 arch/powerpc/boot/wrapper |   20 
 arch/powerpc/include/asm/page.h   |   85 
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/head_44x.S|  110 -
 arch/powerpc/kernel/reloc_32.S|  194 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +-
 arch/powerpc/mm/init_32.c |7 +
 9 files changed, 434 insertions(+), 11 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

--
Suzuki Poulose



[PATCH v2 1/5] [ppc] Process dynamic relocations for kernel

2011-10-25 Thread Suzuki K. Poulose
The following patch implements the dynamic relocation processing for
PPC32 kernel. relocate() accepts the target virtual address and relocates
 the kernel image to the same.

Currently the following relocation types are handled :

R_PPC_RELATIVE
R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA

The last 3 relocations in the above list depend on the value of the symbol
whose index is encoded in the relocation entry. Hence we need the symbol
table for processing such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra
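
For readers less familiar with the ELF side, the R_PPC_RELATIVE case is
roughly the following in C (a sketch only; the real implementation is the
reloc_32.S assembly below, and the ADDR16_* types additionally need the
symbol table, as noted above):

#include <stdint.h>

typedef struct {
	uint32_t r_offset;	/* image-relative location to patch */
	uint32_t r_info;	/* relocation type and symbol index */
	int32_t  r_addend;	/* constant addend A */
} Elf32_Rela;

#define ELF32_R_TYPE(i)	((i) & 0xff)
#define R_PPC_RELATIVE	22

/*
 * For each R_PPC_RELATIVE entry, store B + A, where B is the runtime
 * base address the image is being relocated to. Assumes the image is
 * mapped with its sections at their link-time offsets.
 */
static void apply_relative(uint8_t *image, uint32_t runtime_base,
			   const Elf32_Rela *rela, unsigned int count)
{
	for (unsigned int i = 0; i < count; i++) {
		if (ELF32_R_TYPE(rela[i].r_info) != R_PPC_RELATIVE)
			continue;
		uint32_t *where = (uint32_t *)(image + rela[i].r_offset);
		*where = runtime_base + rela[i].r_addend;
	}
}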

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Alan Modra amo...@au1.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |   12 ++
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/reloc_32.S|  194 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +-
 4 files changed, 215 insertions(+), 1 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8523bd1..016f863 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -859,6 +859,18 @@ config RELOCATABLE
  setting can still be useful to bootwrappers that need to know the
  load location of the kernel (eg. u-boot/mkimage).
 
+config RELOCATABLE_PPC32_PIE
+   bool "Compile the kernel with dynamic relocations (EXPERIMENTAL)"
+   default n
+   depends on PPC32 && RELOCATABLE
+   help
+ This option builds the kernel with dynamic relocations (-pie). Enables
+ the kernel to be loaded at any address for BOOKE processors, removing
+ the page alignment restriction for the load address.
+
+ The option is more useful for platforms where the TLB page size is
+ big (e.g, 256M on 44x), where we cannot enforce the alignment.
+
 config PAGE_OFFSET_BOOL
bool Set custom page offset address
depends on ADVANCED_OPTIONS
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ce4f7f1..0957570 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -85,6 +85,8 @@ extra-$(CONFIG_FSL_BOOKE) := head_fsl_booke.o
 extra-$(CONFIG_8xx):= head_8xx.o
 extra-y+= vmlinux.lds
 
+obj-$(CONFIG_RELOCATABLE_PPC32_PIE)+= reloc_32.o
+
 obj-$(CONFIG_PPC32)+= entry_32.o setup_32.o
 obj-$(CONFIG_PPC64)+= dma-iommu.o iommu.o
 obj-$(CONFIG_KGDB) += kgdb.o
diff --git a/arch/powerpc/kernel/reloc_32.S b/arch/powerpc/kernel/reloc_32.S
new file mode 100644
index 000..045d61e
--- /dev/null
+++ b/arch/powerpc/kernel/reloc_32.S
@@ -0,0 +1,194 @@
+/*
+ * Code to process dynamic relocations for PPC32.
+ *
+ * Copyrights (C) IBM Corporation, 2011.
+ * Author: Suzuki Poulose suz...@in.ibm.com
+ *
+ *  - Based on ppc64 code - reloc_64.S
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation; either version
+ *  2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+
+/* Dynamic section table entry tags */
+DT_RELA = 7     /* Tag for Elf32_Rela section */
+DT_RELASZ = 8   /* Size of the Rela relocs */
+DT_RELAENT = 9  /* Size of one Rela reloc entry */
+
+STN_UNDEF = 0   /* Undefined symbol index */
+STB_LOCAL = 0   /* Local binding for the symbol */
+
+R_PPC_ADDR16_LO = 4 /* Lower half of (S+A) */
+R_PPC_ADDR16_HI = 5 /* Upper half of (S+A) */
+R_PPC_ADDR16_HA = 6 /* High Adjusted (S+A) */
+R_PPC_RELATIVE = 22
+
+/*
+ * r3 = desired final address
+ */
+
+_GLOBAL(relocate)
+
+   mflr    r0
+   bl  0f  /* Find our current runtime address */
+0: mflr    r12 /* Make it accessible */
+   mtlr    r0
+
+   lwz r11, (p_dyn - 0b)(r12)
+   add r11, r11, r12   /* runtime address of .dynamic section */
+   lwz r9, (p_rela - 0b)(r12)
+   add r9, r9, r12 /* runtime address of .rela.dyn section */
+   lwz r10, (p_st - 0b)(r12)
+   add r10, r10, r12   /* runtime address of _stext section */
+   lwz r13, (p_sym - 0b)(r12)
+   add r13, r13, r12   /* runtime address of .dynsym section */
+
+   /*
+* Scan the dynamic section for RELA, RELASZ entries
+*/
+   li  r6, 0
+   li  r7, 0
+   li  r8, 0
+1: lwz r5, 0(r11)  /* ELF_Dyn.d_tag

[PATCH v2 2/5] [ppc] Define virtual-physical translations for PIE relocations

2011-10-25 Thread Suzuki K. Poulose
We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)

relocate() is called with the Effective Virtual Base Address (as
shown below)

            | Phys. Addr | Virt. Addr |
Page        |------------|------------|
Boundary    |            |            |
            |            |            |
            |            |            |
Kernel Load |____________|_ __ _ _ _ _|<- Effective
Addr(_stext)|            |      ^     |   Virt. Base Addr
            |            |      |     |
            |            |      |     |
            |            |reloc_offset|
            |            |      |     |
            |            |      |     |
            |            |______v_____|<- (KERNELBASE)%TLB_SIZE
            |            |            |
            |            |            |
            |            |            |
Page        |------------|------------|
Boundary    |            |            |


On BookE, we need __va() & __pa() early in the boot process to access
the device tree.

Currently this has been defined as :

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + KERNELBASE))
where:
 PHYSICAL_START is kernstart_addr - a variable updated at runtime.
 KERNELBASE is the compile time Virtual base address of kernel.

This won't work for us, as kernstart_addr is dynamic and will yield different
results for __va()/__pa() for same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
= 0xbc100000, which is wrong.

it should be : 0xc0000000 + 0x100000 = 0xc0100000

On platforms which support AMP, like PPC_47x (based on 44x), the kernel
could be loaded at highmem. Hence we cannot always depend on the compile
time constants for mapping.

Here are the possible solutions:

1) Update kernstart_addr(PHYSICAL_START) to match the Physical address of
compile time KERNELBASE value, instead of the actual Physical_Address(_stext).

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va()  __pa() with relocation offset


#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_44x)
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

  a) A variable, say relocation_offset (like kernstart_addr), updated
 at boot time. This impacts performance, as we have to load an additional
 variable from memory.

OR

  b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
      (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

   This introduces more calculations for doing the translation.

3) Redefine __va()  __pa() with a new variable

i.e,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_RELOCATABLE_PPC32_PIE
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* CONFIG_RELOCATABLE_PPC32_PIE */

where virt_phys_offset is updated at runtime to :

Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
                 = 0xc0400000 - 0x400000
                 = 0xc0000000
and

__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
 which is what we want.

I have implemented (3) in the following patch, which has the same cost of
operation as the existing one.
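
In plain C, option (3) boils down to something like this (a standalone
sketch using the illustrative values from the example above, not the
patch itself):

#include <stdio.h>

/* Set once at boot to: effective KERNELBASE - kernstart_addr */
static unsigned long virt_phys_offset;

#define __va(x)	((void *)((unsigned long)(x) + virt_phys_offset))
#define __pa(x)	((unsigned long)(x) - virt_phys_offset)

int main(void)
{
	virt_phys_offset = 0xc0400000UL - 0x400000UL;	/* = 0xc0000000 */

	printf("__va(0x100000) = 0x%lx\n", (unsigned long)__va(0x100000UL));
	printf("__pa(0xc0100000) = 0x%lx\n", __pa(0xc0100000UL));
	return 0;
}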

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Makefile   |1 
 arch/powerpc/include/asm/page.h |   85 ++-
 arch/powerpc/mm/init_32.c   |7 +++
 3 files changed, 90 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 57af16e..77f928f 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -65,6 +65,7 @@ endif
 
 LDFLAGS_vmlinux-yy := -Bstatic
 LDFLAGS_vmlinux-$(CONFIG_PPC64)$(CONFIG_RELOCATABLE) := -pie
+LDFLAGS_vmlinux-y$(CONFIG_RELOCATABLE_PPC32_PIE) := -pie

[PATCH v2 3/5] [44x] Enable CONFIG_RELOCATABLE for PPC44x

2011-10-25 Thread Suzuki K. Poulose
The following patch adds relocatable support for PPC44x kernel.

This enables two types of relocatable kernel support for PPC44x.

1) The old style, mapping based- which restricts the load address to 256M
   aligned.

2) The new approach based on processing dynamic relocation entries -
   CONFIG_RELOCATABLE_PPC32_PIE


In case of CONFIG_RELOCATABLE_PPC32_PIE :

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,256M) +
MODULO(_stext.run,256M)

relocate() is called with the Effective Virtual Base Address (as
shown below)

            | Phys. Addr | Virt. Addr |
Page (256M) |------------|------------|
Boundary    |            |            |
            |            |            |
            |            |            |
Kernel Load |____________|_ __ _ _ _ _|<- Effective
Addr(_stext)|            |      ^     |   Virt. Base Addr
            |            |      |     |
            |            |      |     |
            |            |reloc_offset|
            |            |      |     |
            |            |      |     |
            |            |______v_____|<- (KERNELBASE)%256M
            |            |            |
            |            |            |
            |            |            |
Page (256M) |------------|------------|
Boundary    |            |            |

The virt_phys_offset is updated accordingly, i.e.,

virt_phys_offset = effective kernel virt base - kernstart_addr

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Tony Breeds t...@bakeyournoodle.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig   |4 +
 arch/powerpc/kernel/head_44x.S |  110 +++-
 2 files changed, 108 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 016f863..1cedcda 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -843,7 +843,7 @@ config LOWMEM_CAM_NUM
 
 config RELOCATABLE
bool "Build a relocatable kernel (EXPERIMENTAL)"
-   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || PPC_47x)
+   depends on EXPERIMENTAL && ADVANCED_OPTIONS && FLATMEM && (FSL_BOOKE || 44x)
help
  This builds a kernel image that is capable of running at the
  location the kernel is loaded at (some alignment restrictions may
@@ -862,7 +862,7 @@ config RELOCATABLE
 config RELOCATABLE_PPC32_PIE
bool "Compile the kernel with dynamic relocations (EXPERIMENTAL)"
default n
-   depends on PPC32 && RELOCATABLE
+   depends on PPC32 && RELOCATABLE && 44x
help
  This option builds the kernel with dynamic relocations(-pie). Enables
  the kernel to be loaded at any address for BOOKE processors, removing
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index b725dab..213759e 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -64,6 +64,35 @@ _ENTRY(_start);
mr  r31,r3  /* save device tree ptr */
li  r24,0   /* CPU number */
 
+#ifdef CONFIG_RELOCATABLE_PPC32_PIE
+/*
+ * Relocate ourselves to the current runtime address.
+ * This is called only by the Boot CPU.
+ * relocate is called with our current runtime virtual
+ * address.
+ * r21 will be loaded with the physical runtime address of _stext
+ */
+   bl  0f  /* Get our runtime address */
+0: mflr    r21 /* Make it accessible */
+   addis   r21,r21,(_stext - 0b)@ha
+   addi    r21,r21,(_stext - 0b)@l /* Get our current runtime base */
+
+   /*
+* We have the runtime (virtual) address of our base.
+* We calculate our shift of offset from a 256M page.
+* We could map the 256M page we belong to at PAGE_OFFSET and
+* get going from there.
+*/
+   lis r4,KERNELBASE@h
+   ori r4,r4,KERNELBASE@l
+   rlwinm  r6,r21,0,4,31   /* r6 = PHYS_START % 256M */
+   rlwinm  r5,r4,0,4,31/* r5 = KERNELBASE % 256M */
+   subfr3,r5,r6/* r3 = r6 - r5 */
+   add r3,r4,r3    /* Required Virtual Address */
+
+   bl  relocate
+#endif
+
bl  init_cpu_state
 
/*
@@ -88,14 +117,66 @@ _ENTRY(_start);
 
 #ifdef CONFIG_RELOCATABLE
/*
+* When we reach here :
 * r25

[PATCH v2 5/5] [boot] Change the load address for the wrapper to fit the kernel

2011-10-25 Thread Suzuki K. Poulose
The wrapper code which uncompresses the kernel in case of a 'ppc' boot
is by default loaded at 0x00400000 and the kernel will be uncompressed
to fit the location 0-0x00400000. But with dynamic relocations, the size
of the kernel may exceed 0x00400000 (4M). This would cause an overlap
of the uncompressed kernel and the boot wrapper, causing a failure in
boot.

The message looks like :


   zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
   Allocating 0x5ce650 bytes for kernel ...
   Insufficient memory for kernel at address 0! (_start=00400000, uncompressed size=00591a20)

This patch shifts the load address of the boot wrapper code to the next
higher MB, according to the size of the uncompressed vmlinux.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 arch/powerpc/boot/wrapper |   20 
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper
index c74531a..213a9fd 100755
--- a/arch/powerpc/boot/wrapper
+++ b/arch/powerpc/boot/wrapper
@@ -257,6 +257,8 @@ vmz="$tmpdir/`basename \"$kernel\"`.$ext"
 if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
     ${CROSS}objcopy $objflags "$kernel" "$vmz.$$"
 
+strip_size=$(stat -c %s $vmz.$$)
+
 if [ -n "$gzip" ]; then
     gzip -n -f -9 "$vmz.$$"
 fi
@@ -266,6 +268,24 @@ if [ -z "$cacheit" -o ! -f "$vmz$gzip" -o "$vmz$gzip" -ot "$kernel" ]; then
 else
vmz="$vmz.$$"
 fi
+else
+# Calculate the vmlinux.strip size
+${CROSS}objcopy $objflags "$kernel" "$vmz.$$"
+strip_size=$(stat -c %s $vmz.$$)
+rm -f $vmz.$$
+fi
+
+# Round the size to next higher MB limit
+round_size=$(((strip_size + 0xfffff) & 0xfff00000))
+
+round_size=0x$(printf "%x\n" $round_size)
+link_addr=$(printf "%d\n" $link_address)
+
+if [ $link_addr -lt $strip_size ]; then
+echo "WARN: Uncompressed kernel size(0x$(printf "%x\n" $strip_size))" \
+"exceeds the address of the wrapper($link_address)"
+echo "WARN: Fixing the link_address to ($round_size)"
+link_address=$round_size
 fi
 
vmz="$vmz$gzip"



[PATCH v2 4/5] [44x] Enable CRASH_DUMP for 440x

2011-10-25 Thread Suzuki K. Poulose
Now that we have a relocatable kernel, supporting CRASH_DUMP only requires
turning the switches on for UP machines.

We don't have kexec support on 47x yet. Enabling SMP support would be done
as part of enabling the PPC_47x support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1cedcda..4de7733 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -362,8 +362,8 @@ config KEXEC
 
 config CRASH_DUMP
bool "Build a kdump crash kernel"
-   depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64 || FSL_BOOKE
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)
+   select RELOCATABLE if PPC64 || FSL_BOOKE || 44x
help
  Build a kernel suitable for use as a kdump capture kernel.
  The same kernel binary can be used as production kernel and dump



[PATCH 0/3] Kdump support for PPC440x

2011-10-10 Thread Suzuki K. Poulose
The following series implements CRASH_DUMP support for PPC440x. The
patches apply on top of power-next tree. This set also adds support
for CONFIG_RELOCATABLE on 44x.

I have tested the patches on Ebony and Virtex (QEMU emulated). Testing
these patches requires the latest snapshot of the kexec-tools git tree and
(preferably) the following patch for kexec-tools :

http://lists.infradead.org/pipermail/kexec/2011-October/005552.html

---

Suzuki K. Poulose (3):
  [44x] Enable CRASH_DUMP for 440x
  [44x] Enable CONFIG_RELOCATABLE for PPC44x
  [powerpc32] Process dynamic relocations for kernel


 arch/powerpc/Kconfig  |   10 +-
 arch/powerpc/Makefile |1 
 arch/powerpc/include/asm/page.h   |   84 
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/head_44x.S|  111 ++---
 arch/powerpc/kernel/reloc_32.S|  194 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +-
 arch/powerpc/mm/init_32.c |7 +
 8 files changed, 396 insertions(+), 21 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

-- 
Thanks
Suzuki


[PATCH 1/3] [powerpc32] Process dynamic relocations for kernel

2011-10-10 Thread Suzuki K. Poulose
The following patch implements the dynamic relocation processing for
PPC32 kernel. relocate() accepts the target virtual address and relocates
 the kernel image to the same.

Currently the following relocation types are handled :

R_PPC_RELATIVE
R_PPC_ADDR16_LO
R_PPC_ADDR16_HI
R_PPC_ADDR16_HA

The last 3 relocations in the above list depend on the value of the symbol
whose index is encoded in the relocation entry. Hence we need the symbol
table for processing such relocations.

Note: The GNU ld for ppc32 produces buggy relocations for relocation types
that depend on symbols. The value of the symbols with STB_LOCAL scope
should be assumed to be zero. - Alan Modra

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Alan Modra amo...@au1.ibm.com
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig  |4 +
 arch/powerpc/kernel/Makefile  |2 
 arch/powerpc/kernel/reloc_32.S|  194 +
 arch/powerpc/kernel/vmlinux.lds.S |8 +-
 4 files changed, 207 insertions(+), 1 deletions(-)
 create mode 100644 arch/powerpc/kernel/reloc_32.S

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8523bd1..9eb2e60 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -859,6 +859,10 @@ config RELOCATABLE
  setting can still be useful to bootwrappers that need to know the
  load location of the kernel (eg. u-boot/mkimage).
 
+config RELOCATABLE_PPC32
+   def_bool y
+   depends on PPC32 && RELOCATABLE
+
 config PAGE_OFFSET_BOOL
bool Set custom page offset address
depends on ADVANCED_OPTIONS
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ce4f7f1..ee728e4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -85,6 +85,8 @@ extra-$(CONFIG_FSL_BOOKE) := head_fsl_booke.o
 extra-$(CONFIG_8xx):= head_8xx.o
 extra-y+= vmlinux.lds
 
+obj-$(CONFIG_RELOCATABLE_PPC32)+= reloc_32.o
+
 obj-$(CONFIG_PPC32)+= entry_32.o setup_32.o
 obj-$(CONFIG_PPC64)+= dma-iommu.o iommu.o
 obj-$(CONFIG_KGDB) += kgdb.o
diff --git a/arch/powerpc/kernel/reloc_32.S b/arch/powerpc/kernel/reloc_32.S
new file mode 100644
index 000..045d61e
--- /dev/null
+++ b/arch/powerpc/kernel/reloc_32.S
@@ -0,0 +1,194 @@
+/*
+ * Code to process dynamic relocations for PPC32.
+ *
+ * Copyrights (C) IBM Corporation, 2011.
+ * Author: Suzuki Poulose suz...@in.ibm.com
+ *
+ *  - Based on ppc64 code - reloc_64.S
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation; either version
+ *  2 of the License, or (at your option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+
+/* Dynamic section table entry tags */
+DT_RELA = 7     /* Tag for Elf32_Rela section */
+DT_RELASZ = 8   /* Size of the Rela relocs */
+DT_RELAENT = 9  /* Size of one Rela reloc entry */
+
+STN_UNDEF = 0   /* Undefined symbol index */
+STB_LOCAL = 0   /* Local binding for the symbol */
+
+R_PPC_ADDR16_LO = 4 /* Lower half of (S+A) */
+R_PPC_ADDR16_HI = 5 /* Upper half of (S+A) */
+R_PPC_ADDR16_HA = 6 /* High Adjusted (S+A) */
+R_PPC_RELATIVE = 22
+
+/*
+ * r3 = desired final address
+ */
+
+_GLOBAL(relocate)
+
+   mflr    r0
+   bl  0f  /* Find our current runtime address */
+0: mflr    r12 /* Make it accessible */
+   mtlr    r0
+
+   lwz r11, (p_dyn - 0b)(r12)
+   add r11, r11, r12   /* runtime address of .dynamic section */
+   lwz r9, (p_rela - 0b)(r12)
+   add r9, r9, r12 /* runtime address of .rela.dyn section */
+   lwz r10, (p_st - 0b)(r12)
+   add r10, r10, r12   /* runtime address of _stext section */
+   lwz r13, (p_sym - 0b)(r12)
+   add r13, r13, r12   /* runtime address of .dynsym section */
+
+   /*
+* Scan the dynamic section for RELA, RELASZ entries
+*/
+   li  r6, 0
+   li  r7, 0
+   li  r8, 0
+1: lwz r5, 0(r11)  /* ELF_Dyn.d_tag */
+   cmpwi   r5, 0   /* End of ELF_Dyn[] */
+   beq eodyn
+   cmpwi   r5, DT_RELA
+   bne relasz
+   lwz r7, 4(r11)  /* r7 = rela.link */
+   b   skip
+relasz:
+   cmpwi   r5, DT_RELASZ
+   bne relaent
+   lwz r8, 4(r11)  /* r8 = Total Rela relocs size */
+   b   skip
+relaent:
+   cmpwi   r5, DT_RELAENT
+   bne skip
+   lwz r6, 4(r11)  /* r6 = Size

[PATCH 2/3] [44x] Enable CONFIG_RELOCATABLE for PPC44x

2011-10-10 Thread Suzuki K. Poulose
The following patch adds relocatable support for PPC44x kernel.

We find the runtime address of _stext and relocate ourselves based
on the following calculation.

virtual_base = ALIGN(KERNELBASE,256M) +
MODULO(_stext.run,256M)

relocate() is called with the Effective Virtual Base Address (as
shown below)

            | Phys. Addr | Virt. Addr |
Page (256M) |------------|------------|
Boundary    |            |            |
            |            |            |
            |            |            |
Kernel Load |____________|_ __ _ _ _ _|<- Effective
Addr(_stext)|            |      ^     |   Virt. Base Addr
            |            |      |     |
            |            |      |     |
            |            |reloc_offset|
            |            |      |     |
            |            |      |     |
            |            |______v_____|<- (KERNELBASE)%256M
            |            |            |
            |            |            |
            |            |            |
Page (256M) |------------|------------|
Boundary    |            |            |


On BookE, we need __va() & __pa() early in the boot process to access
the device tree.

Currently this has been defined as :

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + KERNELBASE))
where:
 PHYSICAL_START is kernstart_addr - a variable updated at runtime.
 KERNELBASE is the compile time Virtual base address of kernel.

This won't work for us, as kernstart_addr is dynamic and will yield different
results for __va()/__pa() for same mapping.

e.g.,

Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
PAGE_OFFSET).

In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M

Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
= 0xbc100000, which is wrong.

it should be : 0xc0000000 + 0x100000 = 0xc0100000

On PPC_47x (which is based on 44x), the kernel could be loaded at highmem.
Hence we cannot always depend on the compile time constants for mapping.

Here are the possible solutions:

1) Update kernstart_addr(PHYSICAL_START) to match the Physical address of
compile time KERNELBASE value, instead of the actual Physical_Address(_stext).

The disadvantage is that we may break other users of PHYSICAL_START. They
could be replaced with __pa(_stext).

2) Redefine __va()  __pa() with relocation offset


#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_44x)
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
#define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
#endif

where, RELOC_OFFSET could be

  a) A variable, say relocation_offset (like kernstart_addr), updated
 at boot time. This impacts performance, as we have to load an additional
 variable from memory.

OR

  b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
      (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))

   This introduces more calculations for doing the translation.

3) Redefine __va()  __pa() with a new variable

i.e,

#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))

where VIRT_PHYS_OFFSET :

#ifdef CONFIG_44x
#define VIRT_PHYS_OFFSET virt_phys_offset
#else
#define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
#endif /* 44x */

where virt_phys_offset is updated at runtime to :

Effective KERNELBASE - kernstart_addr.

Taking our example, above:

virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
                 = 0xc0400000 - 0x400000
                 = 0xc0000000
and

__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
 which is what we want.

I have implemented (3) in the following patch, which has the same cost of
operation as the existing one.

I have tested the patches on 440x platforms only. However this should
work fine for PPC_47x also, as we only depend on the runtime address
and the current TLB XLAT entry for the startup code, which is available
in r25. I don't have access to a 47x board yet. So, it would be great if
somebody could test this on 47x.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Paul Mackerras pau...@samba.org
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Kumar Gala ga...@kernel.crashing.org
Cc: Tony Breeds t...@bakeyournoodle.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig|2 -
 arch/powerpc/Makefile   |1 
 arch/powerpc/include/asm/page.h |   84 +-
 arch/powerpc/kernel/head_44x.S  |  111 ++-
 arch/powerpc/mm/init_32.c   |7 ++
 5 files changed, 187 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9eb2e60..99558d6 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc

[PATCH 3/3] [44x] Enable CRASH_DUMP for 440x

2011-10-10 Thread Suzuki K. Poulose
Now that we have a relocatable kernel, supporting CRASH_DUMP only requires
turning the switches on for UP machines.

We don't have kexec support on 47x yet. Enabling SMP support would be done
as part of enabling the PPC_47x support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Josh Boyer jwbo...@gmail.com
Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: linuxppc-dev linuxppc-dev@lists.ozlabs.org
---

 arch/powerpc/Kconfig |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 99558d6..fc41ce5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -362,8 +362,8 @@ config KEXEC
 
 config CRASH_DUMP
bool "Build a kdump crash kernel"
-   depends on PPC64 || 6xx || FSL_BOOKE
-   select RELOCATABLE if PPC64 || FSL_BOOKE
+   depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)
+   select RELOCATABLE if PPC64 || FSL_BOOKE || 44x
help
  Build a kernel suitable for use as a kdump capture kernel.
  The same kernel binary can be used as production kernel and dump



[UPDATED PATCH v2] powerpc32: Kexec support for PPC440X chipsets

2011-07-18 Thread Suzuki K. Poulose
UPDATE: Minor update in Copyright assignment in misc_32.S
Added requirement of upstream kexec-tools.

Changes from v1: Uses a tmp mapping in the other address space to setup
 the 1:1 mapping (suggested by Sebastian Andrzej Siewior).

Note 1: Should we do the same for kernel entry code for PPC44x ?

This patch adds kexec support for PPC440 based chipsets. This work is based
on the KEXEC patches for FSL BookE.

The FSL BookE patch and the code flow could be found at the link below:

http://patchwork.ozlabs.org/patch/49359/

Steps:

1) Invalidate all the TLB entries except the one this code is run from
2) Create a tmp mapping for our code in the other address space and jump to it
3) Invalidate the entry we used
4) Create a 1:1 mapping for 0-2GiB in blocks of 256M
5) Jump to the new 1:1 mapping and invalidate the tmp mapping
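
As a rough illustration of step 4, the 1:1 window is just eight 256M
chunks covering 0-2GiB (a C sketch of the addresses involved, not the
actual tlbwe sequence in the patch):

#include <stdio.h>

#define CHUNK	0x10000000UL	/* 256M per TLB entry */
#define TOTAL	0x80000000UL	/* 2GiB mapped in total */

int main(void)
{
	unsigned long ea;

	/* each entry maps effective == real */
	for (ea = 0; ea < TOTAL; ea += CHUNK)
		printf("TLB entry: EA 0x%08lx -> RA 0x%08lx (256M)\n", ea, ea);
	return 0;
}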

I have tested these patches on Ebony, Sequoia boards and Virtex on QEMU.
It would be great if somebody could test this on the other boards.

You need the latest snapshot of kexec-tools for ppc440x support, available at

 git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git

Signed-off-by:  Suzuki Poulose suz...@in.ibm.com
Cc: Sebastian Andrzej Siewior bige...@linutronix.de
---

 arch/powerpc/Kconfig |2 
 arch/powerpc/include/asm/kexec.h |2 
 arch/powerpc/kernel/misc_32.S|  171 ++
 3 files changed, 173 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 423145a6..d04fae0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -349,7 +349,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
bool kexec system call (EXPERIMENTAL)
-   depends on (PPC_BOOK3S || FSL_BOOKE) && EXPERIMENTAL
+   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !47x)) && EXPERIMENTAL
help
  kexec is a system call that implements the ability to shutdown your
  current kernel, and to start another kernel.  It is like a reboot
diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index 8a33698..f921eb1 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -2,7 +2,7 @@
 #define _ASM_POWERPC_KEXEC_H
 #ifdef __KERNEL__
 
-#ifdef CONFIG_FSL_BOOKE
+#if defined(CONFIG_FSL_BOOKE) || defined(CONFIG_44x)
 
 /*
  * On FSL-BookE we setup a 1:1 mapping which covers the first 2GiB of memory
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index 998a100..f7d760a 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -8,6 +8,8 @@
  * kexec bits:
  * Copyright (C) 2002-2003 Eric Biederman  ebied...@xmission.com
  * GameCube/ppc32 port Copyright (C) 2004 Albert Herranz
+ * PPC44x port. Copyright (C) 2011,  IBM Corporation
+ * Author: Suzuki Poulose suz...@in.ibm.com
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License
@@ -736,6 +738,175 @@ relocate_new_kernel:
mr  r5, r31
 
li  r0, 0
+#elif defined(CONFIG_44x) && !defined(CONFIG_47x)
+
+/*
+ * Code for setting up 1:1 mapping for PPC440x for KEXEC
+ *
+ * We cannot switch off the MMU on PPC44x.
+ * So we:
+ * 1) Invalidate all the mappings except the one we are running from.
+ * 2) Create a tmp mapping for our code in the other address space(TS) and
+ *jump to it. Invalidate the entry we started in.
+ * 3) Create a 1:1 mapping for 0-2GiB in chunks of 256M in original TS.
+ * 4) Jump to the 1:1 mapping in original TS.
+ * 5) Invalidate the tmp mapping.
+ *
+ * - Based on the kexec support code for FSL BookE
+ * - Doesn't support 47x yet.
+ *
+ */
+   /* Save our parameters */
+   mr  r29, r3
+   mr  r30, r4
+   mr  r31, r5
+
+   /* Load our MSR_IS and TID to MMUCR for TLB search */
+   mfspr   r3,SPRN_PID
+   mfmsr   r4
+   andi.   r4,r4,MSR_IS@l
+   beq wmmucr
+   orisr3,r3,PPC44x_MMUCR_STS@h
+wmmucr:
+   mtspr   SPRN_MMUCR,r3
+   sync
+
+   /*
+* Invalidate all the TLB entries except the current entry
+* where we are running from
+*/
+   bl  0f  /* Find our address */
+0: mflr    r5  /* Make it accessible */
+   tlbsx   r23,0,r5    /* Find entry we are in */
+   li  r4,0/* Start at TLB entry 0 */
+   li  r3,0/* Set PAGEID inval value */
+1: cmpwr23,r4  /* Is this our entry? */
+   beq skip/* If so, skip the inval */
+   tlbwe   r3,r4,PPC44x_TLB_PAGEID /* If not, inval the entry */
+skip:
+   addi    r4,r4,1 /* Increment */
+   cmpwi   r4,64   /* Are we done? */
+   bne 

[PATCH v2] powerpc32: Kexec support for PPC440X chipsets

2011-07-12 Thread Suzuki K. Poulose
Changes from V1: Uses a tmp mapping in the other address space to setup
 the 1:1 mapping (suggested by Sebastian Andrzej Siewior).

Note: Should we do the same for kernel entry code for PPC44x ?

This patch adds kexec support for PPC440 based chipsets. This work is based
on the KEXEC patches for FSL BookE.

The FSL BookE patch and the code flow could be found at the link below:

http://patchwork.ozlabs.org/patch/49359/

Steps:

1) Invalidate all the TLB entries except the one this code is run from
2) Create a tmp mapping for our code in the other address space and jump to it
3) Invalidate the entry we used
4) Create a 1:1 mapping for 0-2GiB in blocks of 256M
5) Jump to the new 1:1 mapping and invalidate the tmp mapping

I have tested these patches on Ebony, Sequoia boards and Virtex on QEMU.
It would be great if somebody could test this on the other boards.

Signed-off-by:  Suzuki Poulose suz...@in.ibm.com
Cc: Sebastian Andrzej Siewior bige...@linutronix.de
---

 arch/powerpc/Kconfig |2 
 arch/powerpc/include/asm/kexec.h |2 
 arch/powerpc/kernel/misc_32.S|  170 ++
 3 files changed, 172 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 423145a6..d04fae0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -349,7 +349,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
bool kexec system call (EXPERIMENTAL)
-   depends on (PPC_BOOK3S || FSL_BOOKE) && EXPERIMENTAL
+   depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP && !47x)) && EXPERIMENTAL
help
  kexec is a system call that implements the ability to shutdown your
  current kernel, and to start another kernel.  It is like a reboot
diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index 8a33698..f921eb1 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -2,7 +2,7 @@
 #define _ASM_POWERPC_KEXEC_H
 #ifdef __KERNEL__
 
-#ifdef CONFIG_FSL_BOOKE
+#if defined(CONFIG_FSL_BOOKE) || defined(CONFIG_44x)
 
 /*
  * On FSL-BookE we setup a 1:1 mapping which covers the first 2GiB of memory
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index 998a100..a7881ab 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -8,6 +8,7 @@
  * kexec bits:
  * Copyright (C) 2002-2003 Eric Biederman  ebied...@xmission.com
  * GameCube/ppc32 port Copyright (C) 2004 Albert Herranz
+ * PPC440x port  Copyright (C) 2011 Suzuki Poulose suz...@in.ibm.com
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License
@@ -736,6 +737,175 @@ relocate_new_kernel:
mr  r5, r31
 
li  r0, 0
+#elif defined(CONFIG_44x) && !defined(CONFIG_47x)
+
+/*
+ * Code for setting up 1:1 mapping for PPC440x for KEXEC
+ *
+ * We cannot switch off the MMU on PPC44x.
+ * So we:
+ * 1) Invalidate all the mappings except the one we are running from.
+ * 2) Create a tmp mapping for our code in the other address space(TS) and
+ *jump to it. Invalidate the entry we started in.
+ * 3) Create a 1:1 mapping for 0-2GiB in chunks of 256M in original TS.
+ * 4) Jump to the 1:1 mapping in original TS.
+ * 5) Invalidate the tmp mapping.
+ *
+ * - Based on the kexec support code for FSL BookE
+ * - Doesn't support 47x yet.
+ *
+ */
+   /* Save our parameters */
+   mr  r29, r3
+   mr  r30, r4
+   mr  r31, r5
+
+   /* Load our MSR_IS and TID to MMUCR for TLB search */
+   mfspr   r3,SPRN_PID
+   mfmsr   r4
+   andi.   r4,r4,MSR_IS@l
+   beq wmmucr
+   orisr3,r3,PPC44x_MMUCR_STS@h
+wmmucr:
+   mtspr   SPRN_MMUCR,r3
+   sync
+
+   /*
+* Invalidate all the TLB entries except the current entry
+* where we are running from
+*/
+   bl  0f  /* Find our address */
+0: mflr    r5  /* Make it accessible */
+   tlbsx   r23,0,r5    /* Find entry we are in */
+   li  r4,0/* Start at TLB entry 0 */
+   li  r3,0/* Set PAGEID inval value */
+1: cmpwr23,r4  /* Is this our entry? */
+   beq skip/* If so, skip the inval */
+   tlbwe   r3,r4,PPC44x_TLB_PAGEID /* If not, inval the entry */
+skip:
+   addi    r4,r4,1 /* Increment */
+   cmpwi   r4,64   /* Are we done? */
+   bne 1b  /* If not, repeat */
+   isync
+
+   /* Create a temp mapping and jump to it */
+   andi.   r6, r23, 1  /* Find the index to use */
+   addir24, r6, 1  /* r24 will contain 1 or 2 */
+
+   mfmsr   r9

[PATCH] [v3] kexec-tools: ppc32: Fixup ThreadPointer for purgatory code

2011-07-12 Thread Suzuki K. Poulose
PPC32 ELF ABI expects r2 to be loaded with Thread Pointer, which is 0x7000
bytes past the end of TCB. Though the purgatory is single threaded, it uses
TCB scratch space in vsnprintf(). This patch allocates a 1024-byte TCB
and populates the TP with the address accordingly.
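
The layout being set up is easy to check in C (a sketch; TCB_SIZE and the
0x7000 bias are the values used in the patch, while the hole address is
hypothetical):

#include <stdio.h>

#define TCB_SIZE	1024	/* scratch TCB carved out by the patch */
#define TCB_TP_OFFSET	0x7000	/* PPC32 ELF ABI bias for r2 */

int main(void)
{
	unsigned long tcb = 0x01000000UL;	/* hypothetical hole address */

	/* r2 is set 0x7000 bytes past the end of the allocated TCB */
	unsigned long tp = tcb + TCB_SIZE + TCB_TP_OFFSET;

	printf("TCB at 0x%lx, TP (r2) = 0x%lx\n", tcb, tp);
	return 0;
}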

Changes from V2: Avoid address overflow in TP allocation.
Changes from V1: Fixed the addr calculation for uImage support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Ryan S. Arnold r...@us.ibm.com
---

 kexec/arch/ppc/kexec-elf-ppc.c |   19 +++
 kexec/arch/ppc/kexec-uImage-ppc.c  |   17 +
 purgatory/arch/ppc/purgatory-ppc.c |2 +-
 purgatory/arch/ppc/v2wrap_32.S |4 
 4 files changed, 41 insertions(+), 1 deletions(-)

diff --git a/kexec/arch/ppc/kexec-elf-ppc.c b/kexec/arch/ppc/kexec-elf-ppc.c
index f4443b4..ecbbbeb 100644
--- a/kexec/arch/ppc/kexec-elf-ppc.c
+++ b/kexec/arch/ppc/kexec-elf-ppc.c
@@ -414,6 +414,25 @@ int elf_ppc_load(int argc, char **argv,const char *buf, off_t len,
 elf_rel_set_symbol(&info->rhdr, "stack", &addr, sizeof(addr));
 #undef PUL_STACK_SIZE
 
+   /*
+* Fixup ThreadPointer(r2) for purgatory.
+* PPC32 ELF ABI expects : 
+* ThreadPointer (TP) = TCB + 0x7000
+* We manually allocate a TCB space and set the TP
+* accordingly.
+*/
+#define TCB_SIZE 1024
+#define TCB_TP_OFFSET 0x7000   /* PPC32 ELF ABI */
+
+   addr = locate_hole(info, TCB_SIZE, 0, 0,
+           ((unsigned long)elf_max_addr(&ehdr) - TCB_TP_OFFSET),
+           1);
+   addr += TCB_SIZE + TCB_TP_OFFSET;
+   elf_rel_set_symbol(&info->rhdr, "my_thread_ptr", &addr, sizeof(addr));
+
+#undef TCB_SIZE
+#undef TCB_TP_OFFSET
+
addr = elf_rel_get_addr(&info->rhdr, "purgatory_start");
info->entry = (void *)addr;
 #endif
diff --git a/kexec/arch/ppc/kexec-uImage-ppc.c b/kexec/arch/ppc/kexec-uImage-ppc.c
index 1d71374..216c82d 100644
--- a/kexec/arch/ppc/kexec-uImage-ppc.c
+++ b/kexec/arch/ppc/kexec-uImage-ppc.c
@@ -228,6 +228,23 @@ static int ppc_load_bare_bits(int argc, char **argv, const char *buf,
/* No allocation past here in order not to overwrite the stack */
 #undef PUL_STACK_SIZE
 
+   /*
+* Fixup ThreadPointer(r2) for purgatory.
+* PPC32 ELF ABI expects : 
+* ThreadPointer (TP) = TCB + 0x7000
+* We manually allocate a TCB space and set the TP
+* accordingly.
+*/
+#define TCB_SIZE   1024
+#define TCB_TP_OFFSET  0x7000  /* PPC32 ELF ABI */
+   addr = locate_hole(info, TCB_SIZE, 0, 0,
+   ((unsigned long)-1 - TCB_TP_OFFSET),
+   1);
+   addr += TCB_SIZE + TCB_TP_OFFSET;
+   elf_rel_set_symbol(&info->rhdr, "my_thread_ptr", &addr, sizeof(addr));
+#undef TCB_TP_OFFSET
+#undef TCB_SIZE
+
addr = elf_rel_get_addr(&info->rhdr, "purgatory_start");
info->entry = (void *)addr;
 
diff --git a/purgatory/arch/ppc/purgatory-ppc.c b/purgatory/arch/ppc/purgatory-ppc.c
index 349e750..3e6b354 100644
--- a/purgatory/arch/ppc/purgatory-ppc.c
+++ b/purgatory/arch/ppc/purgatory-ppc.c
@@ -26,7 +26,7 @@ unsigned int panic_kernel = 0;
 unsigned long backup_start = 0;
 unsigned long stack = 0;
 unsigned long dt_offset = 0;
-unsigned long my_toc = 0;
+unsigned long my_thread_ptr = 0;
 unsigned long kernel = 0;
 
 void setup_arch(void)
diff --git a/purgatory/arch/ppc/v2wrap_32.S b/purgatory/arch/ppc/v2wrap_32.S
index 8442d16..8b60677 100644
--- a/purgatory/arch/ppc/v2wrap_32.S
+++ b/purgatory/arch/ppc/v2wrap_32.S
@@ -56,6 +56,10 @@ master:
mr  17,3# save cpu id to r17
mr  15,4# save physical address in reg15
 
+   lis 6,my_thread_ptr@h
+   ori 6,6,my_thread_ptr@l
+   lwz 2,0(6)  # setup ThreadPointer(TP)
+
lis 6,stack@h
ori 6,6,stack@l
lwz 1,0(6)  #setup stack



[PATCH v1] kexec-tools: ppc32: Fixup the ThreadPointer for purgatory code.

2011-07-11 Thread Suzuki K. Poulose
PPC32 ELF ABI expects r2 to be loaded with Thread Pointer, which is 0x7000
bytes past the end of TCB. Though the purgatory is single threaded, it uses
TCB scratch space in vsnprintf(). This patch allocates a 1024-byte TCB
and populates the TP with the address accordingly.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Ryan S. Arnold r...@us.ibm.com
---

 kexec/arch/ppc/kexec-elf-ppc.c |9 +
 kexec/arch/ppc/kexec-uImage-ppc.c  |8 
 purgatory/arch/ppc/purgatory-ppc.c |2 +-
 purgatory/arch/ppc/v2wrap_32.S |4 
 4 files changed, 22 insertions(+), 1 deletions(-)

diff --git a/kexec/arch/ppc/kexec-elf-ppc.c b/kexec/arch/ppc/kexec-elf-ppc.c
index f4443b4..3a4b59b 100644
--- a/kexec/arch/ppc/kexec-elf-ppc.c
+++ b/kexec/arch/ppc/kexec-elf-ppc.c
@@ -414,6 +414,15 @@ int elf_ppc_load(int argc, char **argv,const char *buf, off_t len,
 elf_rel_set_symbol(&info->rhdr, "stack", &addr, sizeof(addr));
 #undef PUL_STACK_SIZE
 
+#define TCB_SIZE 1024
+#define TCB_TP_OFFSET 0x7000   /* PPC32 ELF ABI */
+
+   addr = locate_hole(info, TCB_SIZE, 0, 0, elf_max_addr(&ehdr), 1);
+   addr += TCB_SIZE + TCB_TP_OFFSET;
+   elf_rel_set_symbol(&info->rhdr, "my_thread_ptr", &addr, sizeof(addr));
+#undef TCB_SIZE
+#undef TCB_TP_OFFSET
+
addr = elf_rel_get_addr(&info->rhdr, "purgatory_start");
info->entry = (void *)addr;
 #endif
diff --git a/kexec/arch/ppc/kexec-uImage-ppc.c b/kexec/arch/ppc/kexec-uImage-ppc.c
index 1d71374..4c0adf6 100644
--- a/kexec/arch/ppc/kexec-uImage-ppc.c
+++ b/kexec/arch/ppc/kexec-uImage-ppc.c
@@ -228,6 +228,14 @@ static int ppc_load_bare_bits(int argc, char **argv, const char *buf,
/* No allocation past here in order not to overwrite the stack */
 #undef PUL_STACK_SIZE
 
+#define TCB_SIZE   1024
+#define TCB_TP_OFFSET  0x7000
+   addr = locate_hole(info, TCB_SIZE, 0, 0, -1, 1);
+   addr += TCB_TP_OFFSET;
+   elf_rel_set_symbol(&info->rhdr, "my_thread_ptr", &addr, sizeof(addr));
+#undef TCB_TP_OFFSET
+#undef TCB_SIZE
+
addr = elf_rel_get_addr(&info->rhdr, "purgatory_start");
info->entry = (void *)addr;
 
diff --git a/purgatory/arch/ppc/purgatory-ppc.c b/purgatory/arch/ppc/purgatory-ppc.c
index 349e750..3e6b354 100644
--- a/purgatory/arch/ppc/purgatory-ppc.c
+++ b/purgatory/arch/ppc/purgatory-ppc.c
@@ -26,7 +26,7 @@ unsigned int panic_kernel = 0;
 unsigned long backup_start = 0;
 unsigned long stack = 0;
 unsigned long dt_offset = 0;
-unsigned long my_toc = 0;
+unsigned long my_thread_ptr = 0;
 unsigned long kernel = 0;
 
 void setup_arch(void)
diff --git a/purgatory/arch/ppc/v2wrap_32.S b/purgatory/arch/ppc/v2wrap_32.S
index 8442d16..8b60677 100644
--- a/purgatory/arch/ppc/v2wrap_32.S
+++ b/purgatory/arch/ppc/v2wrap_32.S
@@ -56,6 +56,10 @@ master:
mr  17,3# save cpu id to r17
mr  15,4# save physical address in reg15
 
+   lis 6,my_thread_ptr@h
+   ori 6,6,my_thread_ptr@l
+   lwz 2,0(6)  # setup ThreadPointer(TP)
+
lis 6,stack@h
ori 6,6,stack@l
lwz 1,0(6)  #setup stack



[PATCH v2] kexec-tools: ppc32: Fixup the ThreadPointer for purgatory code.

2011-07-11 Thread Suzuki K. Poulose
PPC32 ELF ABI expects r2 to be loaded with Thread Pointer, which is 0x7000
bytes past the end of TCB. Though the purgatory is single threaded, it uses
TCB scratch space in vsnprintf(). This patch allocates a 1024-byte TCB
and populates the TP with the address accordingly.

Changes from V1: Fixed the addr calculation for uImage support.


Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
Cc: Ryan S. Arnold r...@us.ibm.com
---

 kexec/arch/ppc/kexec-elf-ppc.c |9 +
 kexec/arch/ppc/kexec-uImage-ppc.c  |8 
 purgatory/arch/ppc/purgatory-ppc.c |2 +-
 purgatory/arch/ppc/v2wrap_32.S |4 
 4 files changed, 22 insertions(+), 1 deletions(-)

diff --git a/kexec/arch/ppc/kexec-elf-ppc.c b/kexec/arch/ppc/kexec-elf-ppc.c
index f4443b4..3a4b59b 100644
--- a/kexec/arch/ppc/kexec-elf-ppc.c
+++ b/kexec/arch/ppc/kexec-elf-ppc.c
@@ -414,6 +414,15 @@ int elf_ppc_load(int argc, char **argv,const char *buf, off_t len,
 elf_rel_set_symbol(&info->rhdr, "stack", &addr, sizeof(addr));
 #undef PUL_STACK_SIZE
 
+#define TCB_SIZE 1024
+#define TCB_TP_OFFSET 0x7000   /* PPC32 ELF ABI */
+
+   addr = locate_hole(info, TCB_SIZE, 0, 0, elf_max_addr(&ehdr), 1);
+   addr += TCB_SIZE + TCB_TP_OFFSET;
+   elf_rel_set_symbol(&info->rhdr, "my_thread_ptr", &addr, sizeof(addr));
+#undef TCB_SIZE
+#undef TCB_TP_OFFSET
+
addr = elf_rel_get_addr(&info->rhdr, "purgatory_start");
info->entry = (void *)addr;
 #endif
diff --git a/kexec/arch/ppc/kexec-uImage-ppc.c b/kexec/arch/ppc/kexec-uImage-ppc.c
index 1d71374..4923c83 100644
--- a/kexec/arch/ppc/kexec-uImage-ppc.c
+++ b/kexec/arch/ppc/kexec-uImage-ppc.c
@@ -228,6 +228,14 @@ static int ppc_load_bare_bits(int argc, char **argv, const char *buf,
/* No allocation past here in order not to overwrite the stack */
 #undef PUL_STACK_SIZE
 
+#define TCB_SIZE   1024
+#define TCB_TP_OFFSET  0x7000
+   addr = locate_hole(info, TCB_SIZE, 0, 0, -1, 1);
+   addr += TCB_SIZE + TCB_TP_OFFSET;
+   elf_rel_set_symbol(&info->rhdr, "my_thread_ptr", &addr, sizeof(addr));
+#undef TCB_TP_OFFSET
+#undef TCB_SIZE
+
addr = elf_rel_get_addr(&info->rhdr, "purgatory_start");
info->entry = (void *)addr;
 
diff --git a/purgatory/arch/ppc/purgatory-ppc.c b/purgatory/arch/ppc/purgatory-ppc.c
index 349e750..3e6b354 100644
--- a/purgatory/arch/ppc/purgatory-ppc.c
+++ b/purgatory/arch/ppc/purgatory-ppc.c
@@ -26,7 +26,7 @@ unsigned int panic_kernel = 0;
 unsigned long backup_start = 0;
 unsigned long stack = 0;
 unsigned long dt_offset = 0;
-unsigned long my_toc = 0;
+unsigned long my_thread_ptr = 0;
 unsigned long kernel = 0;
 
 void setup_arch(void)
diff --git a/purgatory/arch/ppc/v2wrap_32.S b/purgatory/arch/ppc/v2wrap_32.S
index 8442d16..8b60677 100644
--- a/purgatory/arch/ppc/v2wrap_32.S
+++ b/purgatory/arch/ppc/v2wrap_32.S
@@ -56,6 +56,10 @@ master:
mr  17,3# save cpu id to r17
mr  15,4# save physical address in reg15
 
+   lis 6,my_thread_ptr@h
+   ori 6,6,my_thread_ptr@l
+   lwz 2,0(6)  # setup ThreadPointer(TP)
+
lis 6,stack@h
ori 6,6,stack@l
lwz 1,0(6)  #setup stack



[PATCH v2] kexec-tools: powerpc: Use the #address-cells information to parsememory/reg

2011-06-16 Thread Suzuki K. Poulose
The format of memory/reg is based on #address-cells and #size-cells. Currently,
kexec-tools doesn't use the above values in parsing the memory/reg values.
Hence kexec cannot handle cases where #address-cells and #size-cells are
different (e.g., PPC440X).

This patch introduces a read_memory_region_limits(), which parses the
memory/reg contents based on the values of #address-cells and #size-cells.
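
The parsing idea, in rough C (a sketch of the logic only; the helper name
and layout here are illustrative, and read_memory_region_limits() in the
patch below is the real interface):

#include <stdint.h>

/*
 * Read one (address, size) pair from a /memory/reg blob, honouring
 * #address-cells / #size-cells. Each cell is a 32-bit big-endian word;
 * the byte swap assumes a little-endian host and would be the identity
 * on big-endian PPC.
 */
static const uint32_t *parse_reg_cells(const uint32_t *p, int addr_cells,
				       int size_cells,
				       uint64_t *start, uint64_t *end)
{
	uint64_t addr = 0, size = 0;
	int i;

	for (i = 0; i < addr_cells; i++)
		addr = (addr << 32) | __builtin_bswap32(*p++);
	for (i = 0; i < size_cells; i++)
		size = (size << 32) | __builtin_bswap32(*p++);

	*start = addr;
	*end = addr + size;
	return p;	/* cursor now points at the next pair, if any */
}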

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 kexec/arch/ppc/crashdump-powerpc.c |   33 ++--
 kexec/arch/ppc/fs2dt.c |   14 ---
 kexec/arch/ppc/kexec-ppc.c |  158 ++--
 kexec/arch/ppc/kexec-ppc.h |6 +
 4 files changed, 129 insertions(+), 82 deletions(-)

diff --git a/kexec/arch/ppc/crashdump-powerpc.c b/kexec/arch/ppc/crashdump-powerpc.c
index 1dd6485..77a01e1 100644
--- a/kexec/arch/ppc/crashdump-powerpc.c
+++ b/kexec/arch/ppc/crashdump-powerpc.c
@@ -81,7 +81,7 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
char fname[256];
char buf[MAXBYTES];
DIR *dir, *dmem;
-   FILE *file;
+   int fd;
struct dirent *dentry, *mentry;
int i, n, crash_rng_len = 0;
unsigned long long start, end, cstart, cend;
@@ -123,17 +123,16 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
if (strcmp(mentry->d_name, "reg"))
continue;
strcat(fname, "/reg");
-   file = fopen(fname, "r");
-   if (!file) {
+   fd = open(fname, O_RDONLY);
+   if (fd < 0) {
perror(fname);
closedir(dmem);
closedir(dir);
goto err;
}
-   n = fread(buf, 1, MAXBYTES, file);
-   if (n < 0) {
-   perror(fname);
-   fclose(file);
+   n = read_memory_region_limits(fd, &start, &end);
+   if (n != 0) {
+   close(fd);
closedir(dmem);
closedir(dir);
goto err;
@@ -146,24 +145,6 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
goto err;
}
 
-   /*
-* FIXME: This code fails on platforms that
-* have more than one memory range specified
-* in the device-tree's /memory/reg property.
-* or where the #address-cells and #size-cells
-* are not identical.
-*
-* We should interpret the /memory/reg property
-* based on the values of the #address-cells and
-* #size-cells properites.
-*/
-   if (n == (sizeof(unsigned long) * 2)) {
-   start = ((unsigned long *)buf)[0];
-   end = start + ((unsigned long *)buf)[1];
-   } else {
-   start = ((unsigned long long *)buf)[0];
-   end = start + ((unsigned long long *)buf)[1];
-   }
if (start == 0 && end >= (BACKUP_SRC_END + 1))
start = BACKUP_SRC_END + 1;
 
@@ -212,7 +193,7 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
= RANGE_RAM;
memory_ranges++;
}
-   fclose(file);
+   close(fd);
}
closedir(dmem);
}
diff --git a/kexec/arch/ppc/fs2dt.c b/kexec/arch/ppc/fs2dt.c
index 238a3f2..733515a 100644
--- a/kexec/arch/ppc/fs2dt.c
+++ b/kexec/arch/ppc/fs2dt.c
@@ -137,21 +137,11 @@ static void add_usable_mem_property(int fd, int len)
if (strncmp(bname, "/memory@", 8) && strcmp(bname, "/memory"))
return;
 
-   if (len < 2 * sizeof(unsigned long))
-   die("unrecoverable error: not enough data for mem property\n");
-   len = 2 * sizeof(unsigned long);
-
if (lseek(fd, 0, SEEK_SET) < 0)
die("unrecoverable error: error seeking in \"%s\": %s\n",
pathname, strerror(errno));
-   if (read(fd, buf, len) != len)
-   die("unrecoverable error: error reading \"%s\": %s\n",
-   pathname, strerror(errno));
-
-   if (~0ULL - buf[0] < buf[1])
-   die("unrecoverable error: mem property overflow\n");
-   base = buf[0];
-   end = base + buf[1];

[PATCH v2] kexec-tools: powerpc: Use the #address-cells information to parse memory/reg

2011-06-16 Thread Suzuki K. Poulose

The problem was with the mail client. Also, I mistyped the To field in the
previous one. Resending.

ChangeLog from V1:
* Changed the interface of read_memory_region_limits() to take an 'int fd'.
* Use sizeof(variable) instead of sizeof(type) in read() calls (see the
  sketch below).
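
For instance (illustrative only, not code from the patch), the difference the
second point is about:

	uint32_t cell;

	/* sizeof(variable): stays correct if 'cell' is ever retyped. */
	if (read(fd, &cell, sizeof(cell)) != sizeof(cell))
		return -1;

	/* sizeof(type): silently reads the wrong amount if 'cell' is
	 * later widened but the type name here is not updated. */
	if (read(fd, &cell, sizeof(uint32_t)) != sizeof(uint32_t))
		return -1;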

---
The format of memory/reg is determined by the #address-cells and #size-cells
properties. Currently, kexec-tools does not use these values when parsing
memory/reg, so kexec cannot handle cases where #address-cells and #size-cells
are not identical (e.g., PPC440x).

This patch introduces read_memory_region_limits(), which parses the
memory/reg contents based on the values of #address-cells and #size-cells.

Signed-off-by: Suzuki K. Poulose suz...@in.ibm.com
---

 kexec/arch/ppc/crashdump-powerpc.c |   33 ++--
 kexec/arch/ppc/fs2dt.c |   14 ---
 kexec/arch/ppc/kexec-ppc.c |  158 ++--
 kexec/arch/ppc/kexec-ppc.h |6 +
 4 files changed, 129 insertions(+), 82 deletions(-)

diff --git a/kexec/arch/ppc/crashdump-powerpc.c b/kexec/arch/ppc/crashdump-powerpc.c
index 1dd6485..77a01e1 100644
--- a/kexec/arch/ppc/crashdump-powerpc.c
+++ b/kexec/arch/ppc/crashdump-powerpc.c
@@ -81,7 +81,7 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
char fname[256];
char buf[MAXBYTES];
DIR *dir, *dmem;
-   FILE *file;
+   int fd;
struct dirent *dentry, *mentry;
int i, n, crash_rng_len = 0;
unsigned long long start, end, cstart, cend;
@@ -123,17 +123,16 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
if (strcmp(mentry->d_name, "reg"))
continue;
strcat(fname, "/reg");
-   file = fopen(fname, "r");
-   if (!file) {
+   fd = open(fname, O_RDONLY);
+   if (fd < 0) {
perror(fname);
closedir(dmem);
closedir(dir);
goto err;
}
-   n = fread(buf, 1, MAXBYTES, file);
-   if (n < 0) {
-   perror(fname);
-   fclose(file);
+   n = read_memory_region_limits(fd, &start, &end);
+   if (n != 0) {
+   close(fd);
+   close(fd);
closedir(dmem);
closedir(dir);
goto err;
@@ -146,24 +145,6 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
goto err;
}
 
-   /*
-* FIXME: This code fails on platforms that
-* have more than one memory range specified
-* in the device-tree's /memory/reg property.
-* or where the #address-cells and #size-cells
-* are not identical.
-*
-* We should interpret the /memory/reg property
-* based on the values of the #address-cells and
-* #size-cells properites.
-*/
-   if (n == (sizeof(unsigned long) * 2)) {
-   start = ((unsigned long *)buf)[0];
-   end = start + ((unsigned long *)buf)[1];
-   } else {
-   start = ((unsigned long long *)buf)[0];
-   end = start + ((unsigned long long *)buf)[1];
-   }
if (start == 0 && end >= (BACKUP_SRC_END + 1))
start = BACKUP_SRC_END + 1;
 
@@ -212,7 +193,7 @@ static int get_crash_memory_ranges(struct memory_range **range, int *ranges)
= RANGE_RAM;
memory_ranges++;
}
-   fclose(file);
+   close(fd);
}
closedir(dmem);
}
diff --git a/kexec/arch/ppc/fs2dt.c b/kexec/arch/ppc/fs2dt.c
index 238a3f2..733515a 100644
--- a/kexec/arch/ppc/fs2dt.c
+++ b/kexec/arch/ppc/fs2dt.c
@@ -137,21 +137,11 @@ static void add_usable_mem_property(int fd, int len)
if (strncmp(bname, "/memory@", 8) && strcmp(bname, "/memory"))
return;
 
-   if (len < 2 * sizeof(unsigned long))
-   die("unrecoverable error: not enough data for mem property\n");
-   len = 2 * sizeof(unsigned long);
-
if (lseek(fd, 0, SEEK_SET) < 0)
die("unrecoverable error: error seeking in \"%s\": %s\n",
pathname, strerror(errno));
-   if (read(fd, buf, len