comaniac edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635663626


   OK I can finally reproduce this case:
   
   ```
   fn (%x: Tensor[(1, 3, 224, 224), float32], %b: Tensor[(3), float32]) -> Tensor[(1, 3, 222, 222), float32] {
     %1 = fn (%x1: Tensor[(1, 3, 224, 224), float32], %b1: Tensor[(3), float32], Composite="pat") -> Tensor[(1, 3, 222, 222), float32] {
       %0 = nn.conv2d(%x1, meta[relay.Constant][0] /* ty=Tensor[(3, 3, 3, 3), float64] */, padding=[0, 0, 0, 0]) /* ty=Tensor[(1, 3, 222, 222), float32] */;
       nn.bias_add(%0, %b1) /* ty=Tensor[(1, 3, 222, 222), float32] */
     };
     %1(%x, %b) /* ty=Tensor[(1, 3, 222, 222), float32] */
   }
   // meta data omitted. you can use show_meta_data=True to include meta data
   ```
   
   Combining this with the previous example, we know that the old `MergeComposite` preserves a constant node when the corresponding pattern node is also a constant; otherwise it lifts the constant out of the composite function. In the DNNL codegen, the pattern inputs are all specified as `VarNode`, so the case of matching a pattern against constant nodes never shows up in the unit tests, which is why we missed it.
   
   However, I personally find this behavior odd. Generally speaking, regardless of whether we specify `var` or `const` in the pattern, we may or may not want constant lifting. It seems to me that this partitioning behavior should not be bound to the pattern, but should instead be a partition option.
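   
   As a purely hypothetical illustration of what such an option could look like (`lift_constants` is invented for this sketch and is not a real parameter of any existing pass):
   
   ```python
   # Hypothetical: make constant lifting an explicit choice at partition time
   # instead of inferring it from whether the pattern node is a var or a const.
   # `lift_constants` does NOT exist; this only sketches the proposed decoupling.
   mod = relay.transform.MergeComposite(pattern_table, lift_constants=False)(mod)
   ```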
   
   Anyway, I think the constant-lifting behavior should be discussed in a separate topic; it is not related to this PR.

