yongfeng-nv commented on a change in pull request #5367:
URL: https://github.com/apache/incubator-tvm/pull/5367#discussion_r414946295
##########
File path: src/te/operation/compute_op.cc
##########
@@ -231,20 +231,18 @@ void ComputeOpNode::PropBoundToInputs(
 // undefined behaviour), so we can intersect the estimated set of the argument with the
 // range expected by the tensor. However, intersection may result in overly complex
 // expressions, so we perform a more relaxed form of intersection.
- IntSet arg_intset = EvalSet(call->args[i], dom_map);
+ IntSet arg_intset = analyzer->int_set(call->args[i], ConvertDomMap(dom_map));
const arith::IntervalSetNode* arg_interval =
arg_intset.as<arith::IntervalSetNode>();
if (arg_interval) {
PrimExpr shape_i_min_value = make_zero(t->shape[i].dtype());
PrimExpr shape_i_max_value = t->shape[i] - 1;
PrimExpr min_value = arg_interval->min_value;
PrimExpr max_value = arg_interval->max_value;
// Prefer the shape bounds only when we can prove they are tighter.
- if (arith::is_neg_inf(min_value) ||
-     analyzer->CanProve(shape_i_min_value >= min_value)) {
+ if ((arith::is_pos_inf(max_value) && arith::is_neg_inf(min_value)) ||
+     (analyzer->CanProve(shape_i_min_value >= min_value) &&
+      analyzer->CanProve(shape_i_max_value <= max_value))) {
Review comment:
`tests/python/unittest/test_target_codegen_blob.py` exposes a case where
`shape_i` is `[0, 0]` and `arg_interval` is `[threadIdx.y, threadIdx.y]`,
with `threadIdx.y` ranging over `[0, 7]`. Either `[0, 0]` or
`[threadIdx.y, threadIdx.y]` alone is a valid bound, but the previous logic
mixed them into `[threadIdx.y, 0]`, an interval that is empty whenever
`threadIdx.y > 0`, which messed up further transforms and triggered an
assertion later. We'd better update both ends at the same time, i.e. treat
the IntervalSet as an atomic object. I'll add this explanation as a comment
in the code.
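The failure mode can be reproduced with a tiny numeric model (a plain-Python
sketch, not TVM code; `mix_ends` and the tuple representation of intervals
are invented here for illustration):

```python
# Illustrative model of the bug. Shape-derived bounds are [0, 0]; the
# argument's bounds are [ty, ty] with ty = threadIdx.y in [0, 7].
def mix_ends(shape_iv, arg_iv):
    """Per-end update, as in the old logic: the result's min is taken
    from one interval while its max comes from the other."""
    return (arg_iv[0], shape_iv[1])  # -> [ty, 0]

# For every ty > 0 the mixed interval [ty, 0] has min > max, i.e. it
# is empty/invalid, which is what broke the later transforms.
invalid = [ty for ty in range(8)
           if mix_ends((0, 0), (ty, ty))[0] > mix_ends((0, 0), (ty, ty))[1]]
print(invalid)
```

Updating both ends atomically, as the patched condition does, means the
result is always one of the two original (well-formed) intervals, so no
inverted interval can be produced.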
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]