Lunderberg commented on PR #12652: URL: https://github.com/apache/tvm/pull/12652#issuecomment-1233043992
> Since we now have the `DeclBuffer` node, if we also convert `T.alloc_buffer` to `T.decl_buffer` + `T.allocate` in `LowerOpaqueBlock`, could the `axis_separators` be preserved? (I am not so familiar with it, but I notice that it is a field of the `Buffer` object.) Maybe then the order of the two passes could be entirely free. Our team relies on an IR form that is block-free but still uses multi-dimensional buffer accesses to perform certain analysis and rewriting, and thus prefers to lower blocks before flattening in a customized configuration.

I like that idea, and having the passes be order-independent would be better overall. There was some similar logic in `StorageFlatten` that would look for a `BufferLoad`/`BufferStore` in order to know the appropriate axes to flatten, but the `DeclBuffer` usage would be even cleaner.

> Do we have some protection case for the axis_separators feature?

In principle, `tests/python/contrib/test_hexagon/test_2d_physical_buffers.py::TestElementWise::test_cache_shape` should have caught it, as it validates the shape of a buffer after lowering. Looking at it again, it uses a TE-based schedule, so it goes through `StorageFlatten` instead of `LowerOpaqueBlock`/`FlattenBuffer`. I am adding a test here that validates the buffer shape after running through all of `tvm.lower`, and will be expanding the hexagon-focused PRs in a follow-up.

(Side note: If `StorageFlatten` were to replace `BufferRealize` with `Allocate` as it currently does, but add a `DeclBuffer` instead of performing the flattening itself, then the same `FlattenBuffer` pass could apply to both types of schedules. That would be another argument in favor of having `FlattenBuffer` read from `DeclBuffer`, to minimize duplication.)
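For readers unfamiliar with `axis_separators`: the field partitions a buffer's logical axes into groups, and flattening fuses each group into a single physical axis instead of collapsing everything to 1-D. A minimal sketch of that shape computation in plain Python (an illustration of the semantics, not TVM's actual implementation):

```python
from math import prod

def flattened_shape(shape, axis_separators):
    """Fuse each group of logical axes (split at the separator
    positions) into one physical axis. With no separators, the
    result is the usual fully-flattened 1-D shape."""
    bounds = [0] + list(axis_separators) + [len(shape)]
    return [prod(shape[lo:hi]) for lo, hi in zip(bounds, bounds[1:])]

# A 4-d logical buffer flattened to a 2-d physical buffer,
# as used for Hexagon's 2-d physical memory:
assert flattened_shape([2, 3, 4, 5], [2]) == [6, 20]

# With no separators, all axes fuse into one:
assert flattened_shape([2, 3, 4, 5], []) == [120]
```

The point of the discussion above is that this shape information must survive whichever pass (`StorageFlatten` or `FlattenBuffer`) performs the fusion, which is why carrying `axis_separators` on the `Buffer` referenced by a `DeclBuffer` would make the pass ordering irrelevant.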
