sjain58 opened a new pull request, #15708:
URL: https://github.com/apache/tvm/pull/15708

   Simplify a Conv->bias_add->mul->add sequence to Conv->bias_add when one input to each of Conv, bias_add, mul, and add is a constant.
   
   def @main(%q1: Tensor[(1, 3, 224, 224), float32]) {
     %0 = nn.conv2d(%q1, meta[relay.Constant][0], padding=[3, 3, 3, 3], channels=64, kernel_size=[7, 7]);
     %1 = nn.bias_add(%0, meta[relay.Constant][1]);
     %2 = multiply(%1, meta[relay.Constant][2]);
     add(%2, meta[relay.Constant][3])
   }
   
   This is replaced with:
   def @main(%q1: Tensor[(1, 3, 224, 224), float32]) {
     %0 = reshape(meta[relay.Constant][1], newshape=[64, 1, 1, 1]);
     %1 = multiply(meta[relay.Constant][0], %0);
     %2 = reshape(meta[relay.Constant][2], newshape=[64, 1, 1]);
     %3 = multiply(%2, meta[relay.Constant][1]);
     %4 = add(%3, meta[relay.Constant][3]);
     %5 = nn.conv2d(%q1, %1, padding=[3, 3, 3, 3], channels=64, kernel_size=[7, 7]);
     %6 = reshape(%4, newshape=[64]);
     nn.bias_add(%5, %6)
   }
   
   res[p,q,r,s] = ({SUM{i=[0,c-1], j=[0,kh-1], k=[0,kw-1]}(a[p,i,r+j,s+k] * W[q,i,j,k])} + b[q]) * c1[q] + c2[q]
   res[p,q,r,s] = {SUM{i=[0,c-1], j=[0,kh-1], k=[0,kw-1]}(a[p,i,r+j,s+k] * W[q,i,j,k])} * c1[q] + b[q]*c1[q] + c2[q]
   res[p,q,r,s] = Conv2d(a, W*c1) + bias_add(b*c1 + c2)
   
   In the above, %1, %3, and %4 are constant expressions, so constant folding eliminates them, leaving 2 ops instead of the original 4.
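   The algebraic identity above can be checked numerically. The sketch below is illustrative only (a naive stride-1, no-padding convolution with small made-up shapes stands in for nn.conv2d; it is not TVM's implementation):

   ```python
   import numpy as np

   def conv2d(x, w):
       # Naive NCHW conv2d, stride 1, no padding (illustrative stand-in).
       n, c, h, wd = x.shape
       o, _, kh, kw = w.shape
       out = np.empty((n, o, h - kh + 1, wd - kw + 1))
       for p in range(n):
           for q in range(o):
               for r in range(out.shape[2]):
                   for s in range(out.shape[3]):
                       out[p, q, r, s] = np.sum(x[p, :, r:r + kh, s:s + kw] * w[q])
       return out

   rng = np.random.default_rng(0)
   x  = rng.standard_normal((1, 3, 8, 8))   # input
   W  = rng.standard_normal((4, 3, 3, 3))   # conv weight
   b  = rng.standard_normal(4)              # bias_add constant
   c1 = rng.standard_normal(4)              # multiply constant
   c2 = rng.standard_normal(4)              # add constant

   # Original graph: (conv2d(x, W) + b) * c1 + c2, broadcast over the channel axis.
   original = (conv2d(x, W) + b[:, None, None]) * c1[:, None, None] + c2[:, None, None]

   # Rewritten graph: conv2d with pre-scaled weight and a folded bias.
   folded = conv2d(x, W * c1[:, None, None, None]) + (b * c1 + c2)[:, None, None]

   assert np.allclose(original, folded)
   ```

   Since W, b, c1, and c2 are all constants, W*c1 and b*c1 + c2 can be computed once at compile time, which is exactly what the constant-folding step does to %1, %3, and %4.
   
   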


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
