Re: [PATCH 30/78] ARM: aarch64: Add relocation support

2018-03-20 Thread Andrey Smirnov
On Mon, Mar 19, 2018 at 1:50 AM, Sascha Hauer  wrote:
> Hi Andrey,
>
> On Fri, Mar 16, 2018 at 02:51:19PM -0700, Andrey Smirnov wrote:
>> On Fri, Mar 16, 2018 at 5:53 AM, Sascha Hauer  wrote:
>> > This adds aarch64 support for relocating binaries linked with -pie.
>> >
>> > Support is integrated into the already existing
>> > relocate_to_current_adr() function which is now used for both arm32
>> > and aarch64.
>> >
>> > Signed-off-by: Sascha Hauer 
>>
>>
>> Sascha:
>>
>> Two small suggestions w.r.t. this patch:
>>
>>  - I'd consider changing the code of relocate_to_current_adr() such
>> that AARCH64 specific codepaths are not taken on ARM32 (via IS_ENABLED
>> check or something similar)
>
> Why? Do you want to make the code more clear or do you fear that we
> apply fixups for a foreign architecture?
>

You'd be able to get the value of *_R_TYPE(type) once and then use a
switch to structure the code that handles the various types, which might
be a bit more readable. But that's 50% personal preference, so feel free
to keep the code as is.

>>
>>  - I've always wanted to fix the original code to use Elf32_rel type
>> instead of magic hard-coded offsets, so depending on your
>> willingness/time-budget, maybe now would be a good time to do that as
>> well as use Elf64_rela for AARCH64?
>
> You mean using
>
>> typedef struct elf32_rel {
>>   Elf32_Addr r_offset;
>>   Elf32_Word r_info;
>> } Elf32_Rel;
>
> and:
>
>> typedef struct elf64_rela {
>>   Elf64_Addr r_offset;   /* Location at which to apply the action */
>>   Elf64_Xword r_info;    /* index and type of relocation */
>>   Elf64_Sxword r_addend; /* Constant addend used to compute value */
>> } Elf64_Rela;
>
> ?

Yep, those are the two types.

>
> Yes, I can change that. I wonder though which type is behind R_ARM_ABS32
>

I think it would still be Elf32_Rel. From what I can tell, both
R_ARM_ABS32 and R_ARM_RELATIVE calculate "*fixup = *fixup + offset"
using the same pointer offsets; it's just that the former also adds "r",
which is derived from "r_info". With enough #ifdefing it might even be
possible to convert that while() to a for() loop.

Thanks,
Andrey Smirnov

___
barebox mailing list
barebox@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/barebox


Re: [PATCH 30/78] ARM: aarch64: Add relocation support

2018-03-19 Thread Sascha Hauer
Hi Andrey,

On Fri, Mar 16, 2018 at 02:51:19PM -0700, Andrey Smirnov wrote:
> On Fri, Mar 16, 2018 at 5:53 AM, Sascha Hauer  wrote:
> > This adds aarch64 support for relocating binaries linked with -pie.
> >
> > Support is integrated into the already existing
> > relocate_to_current_adr() function which is now used for both arm32
> > and aarch64.
> >
> > Signed-off-by: Sascha Hauer 
> 
> 
> Sascha:
> 
> Two small suggestions w.r.t. this patch:
> 
>  - I'd consider changing the code of relocate_to_current_adr() such
> that AARCH64 specific codepaths are not taken on ARM32 (via IS_ENABLED
> check or something similar)

Why? Do you want to make the code more clear or do you fear that we
apply fixups for a foreign architecture?

> 
>  - I've always wanted to fix the original code to use Elf32_rel type
> instead of magic hard-coded offsets, so depending on your
> willingness/time-budget, maybe now would be a good time to do that as
> well as use Elf64_rela for AARCH64?

You mean using 

> typedef struct elf32_rel {
>   Elf32_Addr r_offset;
>   Elf32_Word r_info;
> } Elf32_Rel;

and:

> typedef struct elf64_rela {
>   Elf64_Addr r_offset;   /* Location at which to apply the action */
>   Elf64_Xword r_info;    /* index and type of relocation */
>   Elf64_Sxword r_addend; /* Constant addend used to compute value */
> } Elf64_Rela;

?

Yes, I can change that. I wonder though which type is behind R_ARM_ABS32

Sascha

-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |



Re: [PATCH 30/78] ARM: aarch64: Add relocation support

2018-03-16 Thread Andrey Smirnov
On Fri, Mar 16, 2018 at 5:53 AM, Sascha Hauer  wrote:
> This adds aarch64 support for relocating binaries linked with -pie.
>
> Support is integrated into the already existing
> relocate_to_current_adr() function which is now used for both arm32
> and aarch64.
>
> Signed-off-by: Sascha Hauer 


Sascha:

Two small suggestions w.r.t. this patch:

 - I'd consider changing the code of relocate_to_current_adr() such
that AARCH64 specific codepaths are not taken on ARM32 (via IS_ENABLED
check or something similar)

 - I've always wanted to fix the original code to use Elf32_rel type
instead of magic hard-coded offsets, so depending on your
willingness/time-budget, maybe now would be a good time to do that as
well as use Elf64_rela for AARCH64?

Thanks,
Andrey Smirnov

> ---
>  arch/arm/cpu/common.c| 38 ---
>  arch/arm/cpu/setupc_64.S | 58 
>  common/Kconfig   |  2 +-
>  3 files changed, 89 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm/cpu/common.c b/arch/arm/cpu/common.c
> index 3766116d97..c317e502d0 100644
> --- a/arch/arm/cpu/common.c
> +++ b/arch/arm/cpu/common.c
> @@ -24,39 +24,61 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +
> +#define R_ARM_RELATIVE 23
> +#define R_AARCH64_RELATIVE 1027
>
>  /*
>   * relocate binary to the currently running address
>   */
>  void relocate_to_current_adr(void)
>  {
> -   unsigned long offset;
> +   unsigned long offset, offset_var;
> unsigned long *dstart, *dend, *dynsym, *dynend;
>
> /* Get offset between linked address and runtime address */
> offset = get_runtime_offset();
> +   offset_var = global_variable_offset();
>
> -   dstart = (void *)__rel_dyn_start + offset;
> -   dend = (void *)__rel_dyn_end + offset;
> +   dstart = (void *)__rel_dyn_start + offset_var;
> +   dend = (void *)__rel_dyn_end + offset_var;
>
> -   dynsym = (void *)__dynsym_start + offset;
> -   dynend = (void *)__dynsym_end + offset;
> +   dynsym = (void *)__dynsym_start + offset_var;
> +   dynend = (void *)__dynsym_end + offset_var;
>
> while (dstart < dend) {
> unsigned long *fixup = (unsigned long *)(*dstart + offset);
> unsigned long type = *(dstart + 1);
> +   int add;
> +
> +   if (ELF64_R_TYPE(type) == R_AARCH64_RELATIVE) {
> +   unsigned long addend = *(dstart + 2);
>
> -   if ((type & 0xff) == 0x17) {
> +   *fixup = addend + offset;
> +
> +   add = 3;
> +   } else if (ELF32_R_TYPE(type) == R_ARM_RELATIVE) {
> *fixup = *fixup + offset;
> -   } else {
> +
> +   add = 2;
> +   } else if (ELF32_R_TYPE(type) == R_ARM_ABS32) {
> int index = type >> 8;
> unsigned long r = dynsym[index * 4 + 1];
>
> *fixup = *fixup + r + offset;
> +
> +   add = 2;
> +   } else {
> +   putc_ll('>');
> +   puthex_ll(type);
> +   putc_ll('\n');
> +   /* We're doomed */
> +   panic(NULL);
> }
>
> *dstart += offset;
> -   dstart += 2;
> +   dstart += add;
> }
>
> memset(dynsym, 0, (unsigned long)dynend - (unsigned long)dynsym);
> diff --git a/arch/arm/cpu/setupc_64.S b/arch/arm/cpu/setupc_64.S
> index 3515854784..88c7899205 100644
> --- a/arch/arm/cpu/setupc_64.S
> +++ b/arch/arm/cpu/setupc_64.S
> @@ -16,3 +16,61 @@ ENTRY(setup_c)
> mov x30, x15
> ret
>  ENDPROC(setup_c)
> +
> +/*
> + * void relocate_to_adr(unsigned long targetadr)
> + *
> + * Copy binary to targetadr, relocate code and continue
> + * executing at new address.
> + */
> +.section .text.relocate_to_adr
> +ENTRY(relocate_to_adr)
> +   /* x0: target address */
> +
> +   stp x19, x20, [sp, #-16]!
> +
> +   mov x19, lr
> +
> +   mov x6, x0
> +
> +   bl  get_runtime_offset
> +   mov x5, x0
> +
> +   ldr x0, =_text
> +   mov x8, x0
> +
> +   add x1, x0, x5  /* x1: from address */
> +
> +   cmp x1, x6  /* already at correct address? */
> +   beq 1f  /* yes, skip copy to new address */
> +
> +   ldr x2, =__bss_start
> +
> +   sub x2, x2, x0  /* x2: size */
> +   mov x0, x6  /* x0: target */
> +
> +   /* adjust return address */
> +   sub x19, x19, x1    /* sub address where we are actually running */
> +   add x19, x19, x0    /* add address where we are going to run */
> +
> +