haoyang9804 commented on issue #15282:
URL: https://github.com/apache/tvm/issues/15282#issuecomment-1708161759

   Here is my analysis after carefully reading the source code.
   The function `InstanceNormRel` in `src/relay/op/nn/nn.cc` defines the type constraints for `instance_norm`.
   In `InstanceNormRel`, there are several `reporter->Assign` statements:
   ```cpp
   reporter->Assign(types[1], TensorType({data->shape[axis]}, data->dtype));
   reporter->Assign(types[2], TensorType({data->shape[axis]}, data->dtype));
   reporter->Assign(types[3], TensorType(data->shape, data->dtype));
   ```
   Each of these calls `Assign`, which in turn calls `TypeSolver::Unify` to unify the two types passed as arguments, e.g. `types[2]` and `TensorType({data->shape[axis]}, data->dtype)`.
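   For reference, here is a minimal sketch of how these constraints get exercised from Python (the shapes and parameter choices are my own, not the reproducer from this issue): `relay.transform.InferType` solves the registered type relations, which is where `InstanceNormRel` and the `Assign`/`Unify` calls above run.
   ```python
   import tvm
   from tvm import relay

   # Sketch only: data/gamma/beta shapes are assumed, not taken from the issue.
   data = relay.var("data", shape=(1, 2, 1, 2), dtype="float64")
   gamma = relay.var("gamma", shape=(2,), dtype="float64")
   beta = relay.var("beta", shape=(2,), dtype="float64")
   out = relay.nn.instance_norm(data, gamma, beta, axis=1)

   mod = tvm.IRModule.from_expr(relay.Function([data, gamma, beta], out))
   mod = relay.transform.InferType()(mod)  # InstanceNormRel is evaluated here
   print(mod["main"].checked_type)
   ```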
   If `input_shape` is `[1, 2, 1, 2]` (channel size 2 instead of 1), the program compiles: the two types to be unified are both `Tensor[(2), float64]`, and the compiler can obviously unify them.
   But when `input_shape` is `[1, 1, 2, 2]` (channel size 1), the two types are `Tensor[(1), float64]` and `float64` respectively. It seems that TVM cannot unify `Tensor[(1), float64]` with a plain `float64`, or it mis-analyzes `data->dtype`.
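   To make the failing case concrete, here is a small illustration of the two pairs of types, constructed directly outside the solver. My assumption (not confirmed by the error message alone) is that the bare `float64` is a rank-0 `TensorType`, which Relay renders as just the dtype; under that reading the failing pair differs in rank rather than in dtype.
   ```python
   from tvm import relay

   # Illustration only; the rank-0 reading of the bare `float64` is an assumption.
   ok_lhs = relay.TensorType((2,), "float64")   # channel size 2: Tensor[(2), float64]
   ok_rhs = relay.TensorType((2,), "float64")   # identical, so Unify succeeds

   bad_lhs = relay.TensorType((1,), "float64")  # channel size 1: Tensor[(1), float64]
   bad_rhs = relay.TensorType((), "float64")    # rank-0 tensor; assumed to print as `float64`
   ```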
   
   Either way, I believe this is a TVM bug in type resolution.

