The GitHub Actions job "Lint" on 
tvm.git/refactor-and-introduce-allocbuffer-and-phase-out-allocatenode has 
failed.
Run started by GitHub user tqchen (triggered by tqchen).

Head commit for run:
4d313b3eed4253c062b4a5c983c314fa42636a4b / tqchen <[email protected]>
[TIR][FIX] Fix LeafBlockRemovalPlan to peel AllocBuffer nodes in schedule 
primitives

When a PrimFunc uses `T.alloc_buffer(...)` (which generates an `AllocBuffer`
statement node wrapping the root block body), `LeafBlockRemovalPlan` in
`src/s_tir/schedule/transform.cc` would fail to find the `SeqStmt` inside
the root block body. This caused `reverse_compute_at` (and `compute_at`) to
raise a spurious "Block is the only leaf in the scope" `ScheduleError`.

Root cause: `LeafBlockRemovalPlan` correctly peeled `DeclBuffer` nodes from
the block body before looking for a `SeqStmt`, but did not handle
`AllocBuffer` nodes the same way. Since `MergeNest` (in `ir_utils.cc`)
already supports re-attaching both node types, the fix only needs to make
the peeling side symmetric.

Fix: extend the while loop in `LeafBlockRemovalPlan` to also peel `AllocBuffer`
nodes (saving them in the `allocs` vector) so that the nested `SeqStmt` is
reachable. `MergeNest` then re-attaches both `DeclBuffer` and `AllocBuffer`
wrappers around the modified `SeqStmt`.

Add a regression test in
`tests/python/s_tir/schedule/test_schedule_reverse_compute_at_regression.py`
with a matmul+relu pattern using `T.alloc_buffer` to prevent future regressions.

Report URL: https://github.com/apache/tvm/actions/runs/22625085312

With regards,
GitHub Actions via GitBox


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
