hzfan commented on a change in pull request #7321:
URL: https://github.com/apache/tvm/pull/7321#discussion_r563840338
##########
File path: tests/python/unittest/test_te_autodiff.py
##########
@@ -343,7 +343,23 @@ def test_reduction_init():
check_grad(B, A0)
+def test_stable():
+ X = te.placeholder((32, 512, 16, 16), name="X")
+ W = te.placeholder((1024, 512, 1, 1), name="W")
+ strides, padding, dilation = 2, 0, 1
+ R = topi.nn.conv2d(X, W, strides, padding, dilation)
+ ones = topi.full_like(R, 1.0)
+ grads = te.gradient(R, [X], head=ones)
+ dag = tvm.auto_scheduler.ComputeDAG(grads)
+ repeat = 100
+ for i in range(repeat):
+ grads = te.gradient(R, [X], head=ones)
+ new_dag = tvm.auto_scheduler.ComputeDAG(grads)
+ assert str(dag) == str(new_dag)
Review comment:
> Since auto_scheduler guarantees the DAG would be the same with the
same given compute, you don't need to involve auto_scheduler in this test.
Yes, I agree. I used auto_scheduler here only because it provides a hash key
for TE-level IR. Any ideas about how to compare two `Tensor`s?
> I'm not even sure we need this test, because it seems it cannot expose the
real problem. IIUC, the non-deterministic behavior comes from the use of
unordered_set, so you may still pass this test if you're lucky, even when you
break something. If that happens, this test becomes flaky. But I'd like to
hear opinions from others.
Agree. I'm fine with removing the test.
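For what it's worth, here is a plain-Python sketch (illustration only, not the TVM internals) of the failure mode being discussed: serializing an unordered collection of sub-expressions leaks iteration order into the comparison key, so two equivalent gradient DAGs can print differently, while a canonical ordering makes the key stable. The `naive_key`/`canonical_key` helpers and the gradient-term strings are hypothetical names invented for this sketch.

```python
# Hypothetical illustration (plain Python, not TVM code): why serializing an
# unordered container of sub-expressions makes two equivalent DAGs compare
# unequal as strings, and how canonical ordering restores a stable key.

def naive_key(terms):
    # Serialize in whatever order the container yields -- order-sensitive,
    # analogous to printing a DAG built from an unordered_set.
    return " + ".join(terms)

def canonical_key(terms):
    # Sort before serializing, so equivalent collections compare equal.
    return " + ".join(sorted(terms))

# The same (made-up) gradient terms, collected in two different orders,
# standing in for two runs of te.gradient over the same compute.
run1 = ["dX*W", "dW*X", "bias"]
run2 = ["bias", "dX*W", "dW*X"]

print(naive_key(run1) == naive_key(run2))          # False: order leaks into the key
print(canonical_key(run1) == canonical_key(run2))  # True: stable comparison
```

On the original question of comparing two `Tensor`s directly: structural comparison of the underlying IR (e.g., something like `tvm.ir.structural_equal`, assuming it is available and applies to TE objects in this TVM version) might avoid involving auto_scheduler at all, but as noted above, a deterministic-output test can still pass by luck if the underlying container order happens to match.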
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]