lazycal opened a new pull request #10172:
URL: https://github.com/apache/tvm/pull/10172


   
   The following model
   ```python
   import tvm
   from tvm import relay
   import numpy as np
   
   xshape = (1, 1, 1)
   inp = np.random.uniform(size=xshape).astype(np.int64)
   
   x = relay.var("x", shape=xshape, dtype='int64')
   x = relay.cast(x, 'int64')  # no-op cast, kept to reproduce the bug
   x = relay.broadcast_to(x, relay.const([1, 2, 2], dtype='int64'))
   func = relay.Function(relay.analysis.free_vars(x), -x)
   mod = tvm.IRModule.from_expr(func)
   
   with tvm.transform.PassContext(opt_level=0):
       relay.create_executor("debug", mod, tvm.cpu()).evaluate()(inp)
   ```
   triggers two issues involving a `base`/`stride` dtype mismatch in `Ramp`: 
one in the VectorizeLoop pass and the other in the NarrowDataType pass. 
Specifically:
   - During VectorizeLoop, a loop variable is converted to a ramp whose dtype 
is always `int32`. This PR changes it to use the loop variable's dtype.
   - During NarrowDataType, the `stride` may be inferred as `int32` while the 
`base` is not (see the added test case for details). This PR adds an upcast 
when rewriting a `Ramp` node whose `base` and `stride` were inferred with 
different numbers of bits.
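
   The upcasting in the second fix can be sketched in plain Python. This is an 
illustrative stand-in, not TVM's actual implementation; `dtype_bits` and 
`unify_ramp_dtypes` are hypothetical helper names:

   ```python
   import re

   def dtype_bits(dtype: str) -> int:
       """Extract the bit width from a dtype string such as 'int32'."""
       return int(re.search(r"\d+", dtype).group())

   def unify_ramp_dtypes(base_dtype: str, stride_dtype: str) -> str:
       """Pick the wider of the two inferred dtypes; the narrower operand
       would then be cast to this dtype before rebuilding the Ramp."""
       if dtype_bits(base_dtype) >= dtype_bits(stride_dtype):
           return base_dtype
       return stride_dtype

   unify_ramp_dtypes("int64", "int32")  # -> 'int64'
   ```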

