Hi Richard,

> If global_char really is a char then isn't that UB?  

No, why? We can do all kinds of arithmetic based on pointers, either using
pointer types or values converted to uintptr_t. Note that the optimizer
actually creates these expressions itself; for example, arr[N-x] can be
evaluated as (&arr[0] + N) - x. So this remains legal even if N is well
outside the bounds of the array.

> I guess we still have to compile it without error though...

*THIS* a million times... Any non-trivial application relies on undefined
behaviour with 100% certainty. But we still have to compile it correctly!

>>     To avoid this, limit the offset to +/-1MB so that the symbol needs
>>     to be within a 3.9GB offset from its references.  For the tiny code
>>     model use a 64KB offset, allowing most of the 1MB range for
>>     code/data between the symbol and its references.
>
> These new values seem a bit magical.  Would it work for the original
> testcase to use FORCE_TO_MEM if !offset_within_block_p (x, offset)?

No - the test cases fail with that. It also reduces code quality by not
allowing commonly used offsets as part of the symbol relocation.

So how would you want to make the offsets less magical? There isn't a
clear limit like for MOVW/MOVT relocations (which are limited to
+/-32KB), so it is simply an arbitrary decision that allows this
optimization for all frequently used offsets while avoiding relocation
failures caused by large offsets. The values I used were chosen so that
code quality isn't affected, yet it is practically impossible to trigger
relocation errors.

And they can always be changed again if required - this is not meant to
be the final AArch64 patch ever!

Cheers,
Wilco 
