comaniac opened a new pull request #9545:
URL: https://github.com/apache/tvm/pull/9545


   This is a small patch that removes duplicate logging messages in the Relay frontends. For example, without this PR, the PyTorch frontend emits the following messages:
   
   ```
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   WARNING:root:Untyped Tensor found, assume it is float32
   Incompatible broadcast type TensorType([32, 784, 768], float32) and TensorType([1, 512, 768], float32)
   The type inference pass was unable to infer a type for this expression.
   This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
   note: run with `TVM_BACKTRACE=1` environment variable to display a backtrace.
   ```
   
   With this PR:
   
   ```
   Untyped Tensor found, assume it is float32
   Incompatible broadcast type TensorType([32, 784, 768], float32) and TensorType([1, 512, 768], float32)
   The type inference pass was unable to infer a type for this expression.
   This usually occurs when an operator call is under constrained in some way, check other reported errors for hints of what may of happened.
   note: run with `TVM_BACKTRACE=1` environment variable to display a backtrace.
   ```
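   For context, one standard way to achieve this "warn once" behavior is a `logging.Filter` that remembers messages it has already passed. This is only a minimal sketch of the general technique, not the actual implementation in this PR:
   
   ```python
   import logging
   
   class DeduplicateFilter(logging.Filter):
       """Drop log records whose formatted message was already emitted."""
   
       def __init__(self):
           super().__init__()
           self._seen = set()
   
       def filter(self, record):
           msg = record.getMessage()
           if msg in self._seen:
               return False  # suppress the duplicate
           self._seen.add(msg)
           return True
   
   logger = logging.getLogger("demo")
   logger.addFilter(DeduplicateFilter())
   for _ in range(28):
       # Only the first call actually produces output.
       logger.warning("Untyped Tensor found, assume it is float32")
   ```
   
   Attaching the filter to the frontend's logger (rather than the root logger) keeps the suppression scoped, so unrelated warnings elsewhere are unaffected.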
   
   cc @masahi 

