tqchen edited a comment on pull request #18:
URL: https://github.com/apache/tvm-rfcs/pull/18#issuecomment-893549153


   Thank you @MeeraN7 for the RFC. SVE is certainly an interesting topic.
   
   Because we do not yet have SVE support in TIR, it would be useful to think 
carefully about how SVE can be represented and transformed. Right now the 
proposal contains one way to do so. However, we could use a bit more context 
to see how the proposed design impacts the general lowering flow.
   
   Specifically, it would be very helpful to also show examples of the code 
along the transformations, so we can better understand the possible design 
tradeoffs. It might be helpful to translate some of the examples in the 
[whitepaper](https://developer.arm.com/solutions/hpc/resources/hpc-white-papers/arm-scalable-vector-extensions-and-application-to-machine-learning)
 to tvmscript form.
   
   In particular, examples at these stages:
   
   - The TIR before SVE vectorization
   - The TIR after SVE vectorization right before LLVM codegen
   - The corresponding lowered llvm intrinsics
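   For instance, a minimal vector add written in a tvmscript-like sketch 
(the exact decorator and buffer syntax here are illustrative, not taken from 
the RFC) would be a natural starting point for the "TIR before SVE 
vectorization" stage:

```
# Hypothetical tvmscript-style sketch; names and annotation syntax
# are assumptions for illustration, not part of the proposal.
@T.prim_func
def vec_add(A: T.Buffer((1024,), "float32"),
            B: T.Buffer((1024,), "float32"),
            C: T.Buffer((1024,), "float32")) -> None:
    for i in T.serial(1024):
        C[i] = A[i] + B[i]
```

   Seeing what this function becomes after SVE vectorization, and which 
scalable-vector LLVM intrinsics it finally lowers to, would make the 
tradeoffs much easier to evaluate.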
   
   To touch a bit on the design alternatives (disclaimer: I only learnt 
VLA/SVE by quickly reading through the manual, so I could be wrong). Based on 
my understanding, the main goal of VLA is to use intrinsics to represent a 
somewhat restricted loop pattern (in terms of the operations we can perform), 
while the previous fixed-length vectorization pushes the work onto vector 
constructs such as Ramp and Broadcast. 
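   To make the contrast concrete, here is a plain-Python sketch (the helper 
names mirror TIR's Ramp and Broadcast nodes, but these are stand-ins I wrote 
to illustrate their semantics, not TVM APIs) of what fixed-length 
vectorization encodes:

```python
# Plain-Python stand-ins that mimic the semantics of TIR's Ramp and
# Broadcast nodes (illustrative only; not the actual TVM classes).

def ramp(base, stride, lanes):
    # Ramp(base, stride, lanes) denotes the index vector
    # [base, base + stride, ..., base + (lanes - 1) * stride]
    return [base + i * stride for i in range(lanes)]

def broadcast(value, lanes):
    # Broadcast(value, lanes) replicates a scalar across all lanes.
    return [value] * lanes

# A fixed-length vectorized store A[Ramp(0, 1, 4)] = Broadcast(2.0, 4)
# writes these values at these indices:
indices = ramp(0, 1, 4)      # [0, 1, 2, 3]
values = broadcast(2.0, 4)   # [2.0, 2.0, 2.0, 2.0]
```

   The key point is that the lane count (4 here) is baked into the IR nodes, 
which is exactly what VLA avoids.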
   
   I wonder if we could take a different approach for VLA/SVE, because VLA is 
by nature closer to the loop pattern. Specifically, I feel we could come up 
with some form of "loop SVE legalization" that legalizes a loop's body into 
the patterns that SVE supports, then leaves the for loop as-is with a VLA 
annotation. The code generator can then take that and generate VLA code.
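   Concretely (purely a sketch of the idea; the annotation name and syntax 
here are invented for illustration), such a legalized loop might look like:

```
# Hypothetical output of "loop SVE legalization": the body is already
# restricted to SVE-supported patterns, the loop itself stays scalar,
# and an (invented) annotation tells codegen to emit VLA code.
for i in T.serial(1024, annotations={"pragma_vla": True}):
    C[i] = A[i] + B[i]
```

   The codegen could then map the annotated loop directly onto SVE's 
predicated, vector-length-agnostic instructions, without ever materializing 
fixed-lane Ramp or Broadcast nodes in the IR.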
   
   
   
   

