sbc100 added a subscriber: ngzhian.
sbc100 added a comment.

In D151820#4385568 <https://reviews.llvm.org/D151820#4385568>, @sbc100 wrote:

> In D151820#4385536 <https://reviews.llvm.org/D151820#4385536>, @dschuff wrote:
>
>>> I don't think it will, since `__BIGGEST_ALIGNMENT__ >= 
>>> XNN_ALLOCATION_ALIGNMENT` will remain true after this change, so this 
>>> change should have no effect on that code.
>>
>> I meant that when `__BIGGEST_ALIGNMENT__ >= XNN_ALLOCATION_ALIGNMENT` (which 
>> was true before and will remain true), then XNNPack uses 
>> `__builtin_alloca()` as the implementation of `XNN_SIMD_ALLOCA` (which 
>> presumably is for allocating SIMD values). This change will reduce the 
>> alignment used by `__builtin_alloca()` from 16 to 8, such that (I think) it 
>> is no longer suitable for SIMD values.
>>
>> Maybe this is a bug in XNNPack (perhaps they should be setting 
>> XNN_ALLOCATION_ALIGNMENT to a value suitable for SIMD?), but given that 
>> BIGGEST_ALIGNMENT and alloca seem to be intended for any base type 
>> (including SIMD), it wouldn't be surprising if someone else were depending 
>> on this too.
>
> XNN_ALLOCATION_ALIGNMENT is 8 under WebAssembly, which is apparently the 
> alignment that XNNPack wants for WebAssembly.  Using alloca for this is 
> fine both before and after this change, since both 16 and 8 satisfy this 
> requirement.
>
>> which... maybe this is just re-litigating the previous discussion, I don't 
>> know. I wonder at what point our ABI should be treating SIMD values as 
>> "normal" rather than rare.

If XNNPack wanted more than 8-byte alignment, surely it should set 
XNN_ALLOCATION_ALIGNMENT to a value greater than 8?
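
For reference, here is my rough mental model of the pattern under discussion 
(a minimal sketch assuming a simplified XNNPack-like setup, not XNNPack's 
actual source; the fallback branch is purely hypothetical):

  #include <stdint.h>

  #define XNN_ALLOCATION_ALIGNMENT 8  /* the value under WebAssembly */

  #if defined(__BIGGEST_ALIGNMENT__) && \
      __BIGGEST_ALIGNMENT__ >= XNN_ALLOCATION_ALIGNMENT
    /* __builtin_alloca() returns memory aligned to __BIGGEST_ALIGNMENT__,
       which already satisfies the requested alignment, so use it directly. */
    #define XNN_SIMD_ALLOCA(size) __builtin_alloca(size)
  #else
    /* Hypothetical fallback: over-allocate and round the pointer up. */
    #define XNN_SIMD_ALLOCA(size)                                     \
      ((void*) (((uintptr_t) __builtin_alloca(                        \
                     (size) + XNN_ALLOCATION_ALIGNMENT - 1) +         \
                 XNN_ALLOCATION_ALIGNMENT - 1) &                      \
                ~(uintptr_t) (XNN_ALLOCATION_ALIGNMENT - 1)))
  #endif

Since `__BIGGEST_ALIGNMENT__` is >= 8 both before (16) and after (8) this 
change, the first branch is taken either way; the question @dschuff raises is 
whether the 8-byte-aligned alloca result is still suitable for SIMD values.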

Adding @tlively and @ngzhian in case they have some more background on this.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D151820/new/

https://reviews.llvm.org/D151820
