AndrewZhaoLuo opened a new pull request #8862:
URL: https://github.com/apache/tvm/pull/8862


   The old workload key uses an md5 hash of the string representation of the
computational DAG. This raises the question of what to do when a hash
collision occurs. A collision will probably never happen in practice, but this
is a principled fix and should not slow anything down.
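   The change can be sketched as follows. This is an illustrative snippet, not TVM's actual implementation; `dag_str` is a stand-in for the DAG's string representation.

```python
import hashlib

# Stand-in for the string representation of a computational DAG.
dag_str = "Conv2dOutput(nn, yy, xx, ff) += ..."

# Old scheme: the key starts with an md5 digest of the DAG string, so two
# different DAGs could in principle collide on the same 32-hex-char digest.
old_prefix = hashlib.md5(dag_str.encode("utf-8")).hexdigest()

# New scheme: the key embeds the DAG string itself, so distinct DAGs are
# guaranteed to produce distinct keys (at the cost of longer keys).
new_prefix = dag_str
```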
   
   Additionally, this fixes a bug in how the workload key is constructed: we
append the shapes of the input tensors to the DAG, but each shape's integers
were being flattened into one big list instead of each shape being appended
as a tuple.
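   A minimal sketch of the shape-appending bug and its fix (illustrative only; the variable names are hypothetical, not TVM's):

```python
import json

# Hypothetical input-tensor shapes for a DAG.
dag_str = "Conv2dOutput(...)"  # stand-in for the DAG's string representation
shapes = [(1, 56, 56, 64), (3, 3, 64, 64)]

# Buggy: extend() flattens every shape into one long list of integers,
# losing the boundaries between tensors.
old_args = [dag_str]
for shape in shapes:
    old_args.extend(shape)
# old_args == [dag_str, 1, 56, 56, 64, 3, 3, 64, 64]

# Fixed: append() keeps each shape as its own list inside the key.
new_args = [dag_str]
for shape in shapes:
    new_args.append(list(shape))
# new_args == [dag_str, [1, 56, 56, 64], [3, 3, 64, 64]]

workload_key = json.dumps(new_args)
```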
   
   For example, before this change the workload key might be
   ```"[\"7da4b9353f31499138ec976c527728b7\", 1, 1, 56, 56, 64, 2, 1, 3, 3, 64, 
32, 1, 2, 1, 1, 32, 1, 2, 56, 56, 32]"```
   
   Now the workload key might be
   ```
   ["placeholder = PLACEHOLDER \nPaddedInput(i0, i1, i2, i3) = 
Op(tir.if_then_else)\nplaceholder = PLACEHOLDER \nConv2dOutput(nn, yy, xx, ff) 
+= (PaddedInput[nn, (yy + ry), (xx + rx), rc]*placeholder[ry, rx, rc, 
ff])\nplaceholder = PLACEHOLDER \nT_add(ax0, ax1, ax2, ax3) = 
(Conv2dOutput[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, 
ax3])\nplaceholder = PLACEHOLDER \nT_add(ax0, ax1, ax2, ax3) = (T_add[ax0, ax1, 
ax2, ax3] + placeholder[ax0, 0, 0, ax3])\nT_relu(ax0, ax1, ax2, ax3) = 
max(T_add[ax0, ax1, ax2, ax3], 0f)\n", [1, 56, 56, 64], [3, 3, 64, 64], [1, 56, 
56, 64], [1, 1, 1, 64], [1, 56, 56, 64]]
   ```
   
   I think the only downside is larger log files.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
