yongfeng-nv commented on a change in pull request #5382:
URL: https://github.com/apache/incubator-tvm/pull/5382#discussion_r420255081
##########
File path: src/te/operation/op_util.cc
##########
@@ -164,9 +164,21 @@ MakeLoopNest(const Stage& stage,
value_map[iv] = dom->min;
} else {
runtime::ThreadScope ts =
runtime::ThreadScope::make(bind_iv->thread_tag);
- if (stage->scope == "" || stage->scope == "warp" ||
+ if (stage->scope == "" ||
static_cast<int>(runtime::StorageScope::make(stage->scope).rank)
<= ts.rank) {
value_map[iv] = var;
+ } else if (stage->scope == "warp" && ts.rank == 1) {
+ // To determine whether a thread index is inside or outside a warp, we need
+ // to know the thread extent. We leave a warning for now.
+ if (ts.dim_index == 0) {
+ value_map[iv] = var;
+ } else {
Review comment:
Thanks for the elaboration.
I added lines 167/168 a few weeks ago. I am glad that this PR fixes the
"warp" scope; I wasn't very sure about it and kept the old behavior there to
avoid one test failure. Your explanation is almost the same as what I
thought: when a StorageScope is only visible within a ThreadScope (such as
shared within a block, or local within a thread), we don't access it with the
enclosing thread IterVars (such as blockIdx or threadIdx), i.e., those
IterVars do not contribute to the offset.
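To check my understanding, here is a minimal standalone sketch, not TVM's actual code, of how I read the rank comparison on lines 167/168; the enum values mirror what I believe `runtime::StorageRank` and `runtime::ThreadScope` use. A thread IterVar contributes to the buffer offset only when the storage is shared across the threads that IterVar enumerates:

```cpp
#include <cassert>

// Hypothetical ranks mirroring (I believe) TVM's StorageRank and
// ThreadScope ranks: global=0, shared=1, warp=2, local=3;
// blockIdx has thread rank 0, threadIdx has thread rank 1.
enum class StorageRank { kGlobal = 0, kShared = 1, kWarp = 2, kLocal = 3 };

// A thread var indexes into a buffer only if the storage is visible
// across threads at that rank, i.e. storage rank <= thread rank.
bool ContributesToOffset(StorageRank storage, int thread_rank) {
  return static_cast<int>(storage) <= thread_rank;
}

int main() {
  // shared is per-block: threadIdx (rank 1) offsets it,
  // blockIdx (rank 0) does not, since each block has its own copy.
  assert(ContributesToOffset(StorageRank::kShared, /*threadIdx*/ 1));
  assert(!ContributesToOffset(StorageRank::kShared, /*blockIdx*/ 0));
  // local is per-thread: neither blockIdx nor threadIdx offsets it.
  assert(!ContributesToOffset(StorageRank::kLocal, /*threadIdx*/ 1));
  return 0;
}
```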
If my idea makes sense, I am confused by this statement
> but those outside a warp do not

and by the second situation. In my mind, the conceptual "warp" storage is
accessible by multiple threads, so it makes sense to keep the threadIdx
IterVar to offset the access. But I don't understand why things change for
threads outside a warp. In the 2nd situation, is cross-"warp" access allowed?
In the 1st situation, does threadIdx.x's extent matter here, or is that
handled by warp memory lowering?
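To make my questions concrete, this is how I currently read the new branch, as a sketch under my assumptions: I'm assuming the elided `else` body falls back to `dom->min` with the warning, as the non-warp case at the top of this hunk does. Only threadIdx.x (dim_index 0) stays in the offset for the "warp" scope, while the y/z dimensions are dropped, since they select different warps rather than lanes within one:

```cpp
#include <iostream>
#include <string>

// Hypothetical mirror of the new "warp"-scope branch in MakeLoopNest.
// dim_index 0 is threadIdx.x (lanes within a warp, provided its extent
// does not exceed the warp size); dim_index 1/2 are threadIdx.y/z.
std::string WarpScopeBinding(int dim_index) {
  if (dim_index == 0) {
    // Lane index: kept in value_map, so it offsets the warp memory.
    return "var";
  }
  // Assumed fallback: these dimensions address other warps, and if
  // cross-warp access is not allowed they collapse to the domain min.
  return "dom->min";
}

int main() {
  std::cout << "threadIdx.x -> " << WarpScopeBinding(0) << "\n";  // var
  std::cout << "threadIdx.y -> " << WarpScopeBinding(1) << "\n";  // dom->min
  return 0;
}
```

If that reading is wrong, especially the fallback for the y/z dimensions, please correct me.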