vinx13 commented on pull request #39: URL: https://github.com/apache/tvm-rfcs/pull/39#issuecomment-936668384
One way to represent the layout mapping in TIR is to introduce different storage scopes and keep a registry of pre-defined layout mappings (for example, we already did something similar for [`wmma` fragments](https://github.com/apache/tvm/blob/813136401a11a49d6c15e6013c34dd822a5c4ff6/python/tvm/topi/cuda/tensor_intrin.py#L94-L101), a special data structure for tensor core inputs). The downside is that the TIR itself doesn't contain the mapping, so the meaning of the TIR depends on the contents of the registry. If we can settle on a few pre-defined mappings general enough to be reused across different operators, this might be fine. A possible way to embed the mapping directly in the TIR is to build a `PrimExpr` from the mapping function, similar to what we do for [te.compute](https://github.com/apache/tvm/blob/main/python/tvm/te/operation.py#L106).
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
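To make that last point concrete, here is a rough, hypothetical sketch in plain Python (no TVM dependency; the `Expr`/`Var`/`build_index_map` names are illustrative, not TVM APIs). A layout mapping written as a Python lambda can be "traced" with symbolic index variables to recover an index expression tree, analogous to how `te.compute` invokes the user's lambda on freshly created loop variables to build a `PrimExpr`:

```python
class Expr:
    """Tiny symbolic expression base: operators build an expression tree."""
    def __add__(self, other): return Binary("+", self, wrap(other))
    def __mul__(self, other): return Binary("*", self, wrap(other))
    def __floordiv__(self, other): return Binary("//", self, wrap(other))
    def __mod__(self, other): return Binary("%", self, wrap(other))

class Var(Expr):
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

class Const(Expr):
    def __init__(self, value): self.value = value
    def __repr__(self): return str(self.value)

class Binary(Expr):
    def __init__(self, op, a, b): self.op, self.a, self.b = op, a, b
    def __repr__(self): return f"({self.a} {self.op} {self.b})"

def wrap(x):
    return x if isinstance(x, Expr) else Const(x)

def build_index_map(mapping, ndim):
    """Trace `mapping` with fresh symbolic vars, the way te.compute
    calls the user's fcompute on newly created loop variables."""
    axes = [Var(f"i{k}") for k in range(ndim)]
    return axes, mapping(*axes)

# Example: a 2D tiled layout, (i, j) -> (i // 16, j // 16, i % 16, j % 16)
axes, exprs = build_index_map(lambda i, j: (i // 16, j // 16, i % 16, j % 16), 2)
print(exprs)  # ((i0 // 16), (i1 // 16), (i0 % 16), (i1 % 16))
```

The resulting expression tuple is a self-describing artifact that could be attached to a buffer in the IR, so a reader of the TIR would not need to consult an external registry to know the physical layout.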
