mbrookhart commented on pull request #7910:
URL: https://github.com/apache/tvm/pull/7910#issuecomment-825042408


   I find it interesting that if I freeze the weights, I still fail to compile 
the all_class_non_max_suppression function:
   ```
   ---------------------------------------------------------------
   An internal invariant was violated during the execution of TVM.
   Please read TVM's error reporting guidelines.
   More details can be found here: 
https://discuss.tvm.ai/t/error-reporting/7793.
   ---------------------------------------------------------------
   
     Check failed: arg.dtype() == value.dtype() (int32 vs. int64) : 
   Error during compile function
   -----------------------------
   #[version = "0.0.5"]
   fn (%p0: Tensor[(1, 1344, 4), float32], %p1: Tensor[(1, 1, 1344), float32], 
%p2: int64, %p3: float32, %p4: float32, Primitive=1) -> (Tensor[(1344, 3), 
int64], Tensor[(1), int64]) {
     vision.all_class_non_max_suppression(%p0, %p1, %p2, %p3, %p4, 
meta[relay.attrs.AllClassNonMaximumSuppressionAttrs][0]) /* ty=(Tensor[(1344, 
3), int64], Tensor[(1), int64]) */
   }
   ```
   
   I'm not sure why that would be; probably something isn't constant-folding correctly, so the `max_output_boxes_per_class` scalar reaches the fused function as a free int64 parameter (`%p2`) instead of an int32 constant.
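   
   For reference, a minimal sketch (assuming a TVM build with Relay available) of the folding that appears to be missing: if the int64 scalar were a constant, `FoldConstant` should collapse the cast and downstream ops would see a plain int32 constant rather than a free int64 parameter. The values here are made up for illustration.
   ```python
   import tvm
   from tvm import relay
   
   # Hypothetical repro sketch: an int64 scalar cast to int32.
   # When the scalar is a Relay constant, FoldConstant collapses the
   # cast into an int32 constant; when it stays a free parameter (like
   # %p2 in the error above), the int32-vs-int64 mismatch survives.
   x = relay.const(1344, dtype="int64")
   y = relay.cast(x, "int32")
   mod = tvm.IRModule.from_expr(relay.Function([], y))
   mod = relay.transform.FoldConstant()(mod)
   print(mod)
   ```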


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
