areusch opened a new pull request #7566:
URL: https://github.com/apache/tvm/pull/7566


   It turns out we have no AutoTVM integration tests on `main` that actually assert 
anything. This PR adds one and fixes my breakage.
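   For context, "actually assert on anything" means the test should check that tuning produced at least one error-free measurement rather than just running to completion. A minimal, standalone sketch of that assertion (the `MeasureResult` shape mirrors autotvm's in spirit, with `error_no == 0` meaning success, but this example fabricates results rather than calling TVM):
   
   ```python
   # Hedged sketch: what a tuning test with a real assertion checks.
   # This does not call TVM; the results are fabricated for illustration.
   from collections import namedtuple
   
   MeasureResult = namedtuple("MeasureResult", ["costs", "error_no"])
   
   def assert_tuning_succeeded(results):
       # Keep only measurements that ran without error (error_no == 0).
       successes = [r for r in results if r.error_no == 0]
       assert successes, "tuning produced no valid schedules"
       # Return the best (lowest) cost among successful runs.
       return min(min(r.costs) for r in successes)
   
   results = [
       MeasureResult(costs=(0.01,), error_no=0),  # one good measurement
       MeasureResult(costs=(), error_no=1),       # one failed candidate
   ]
   best = assert_tuning_succeeded(results)
   print(best)  # 0.01
   ```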
   
   Still TODO: fix test_tuning_gpu, which has been failing for some time. I don't 
know how to fix it myself, so it would be great if someone familiar with CUDA 
could debug the error:
   
   ```
   DEBUG    autotvm:tuner.py:163 No: 1     GFLOPS: 0.00/0.00       result: 
MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n  
[bt] (4) /workspace/build/libtvm.so(TVMFuncCall+0x61) [0x7fd90af1c2f1]\n  [bt] 
(3) /workspace/build/libtvm.so(+0x78e326) [0x7fd90a1eb326]\n  [bt] (2) 
/workspace/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule,
 tvm::transform::PassContext const&) const+0x3ed) [0x7fd90a1e885d]\n  [bt] (1) 
/workspace/build/libtvm.so(tvm::tir::transform::PrimFuncPassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x55b) 
[0x7fd90a627c7b]\n  [bt] (0) /workspace/build/libtvm.so(+0x14bc1eb) 
[0x7fd90af191eb]\n  File "/workspace/python/tvm/_ffi/_ctypes/packed_func.py", 
line 81, in cfun\n    rv = local_pyfunc(*pyargs)\n  File 
"/workspace/python/tvm/autotvm/measure/measure_methods.py", line 741, in 
verify_pass\n    raise InstantiationError("Skipped because of invalid gpu 
kernel")\ntvm.autotvm.task.space.InstantiationError: Skipped because of invalid 
gpu kernel',),), error_no=1, all_cost=0.03403472900390625, 
timestamp=1614703720.5951383)   [('tile_f', [-1, 32, 4, 2]), ('tile_y', [-1, 5, 
1, 1]), ('tile_x', [-1, 5, 1, 1]), ('tile_rc', [-1, 32, 2]), ('tile_ry', [-1, 
1, 3]), ('tile_rx', [-1, 3, 1]), ('auto_unroll_max_step', 1500), 
('unroll_explicit', 0)],None,4506777
   ```
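   For anyone picking this up: the `InstantiationError` above comes from AutoTVM's GPU verification step, which skips candidate schedules that would exceed hardware limits (e.g. threads per block or shared memory). A rough, self-contained sketch of that kind of check follows; the limits and function names here are illustrative, not TVM's actual API, and whether this particular tiling config trips a thread limit or something else is exactly what needs debugging:
   
   ```python
   # Illustrative sketch of the kind of check behind "invalid gpu kernel".
   # verify_gpu_limits and its thresholds are hypothetical; the real TVM
   # pass inspects the lowered IR rather than taking raw numbers.
   class InstantiationError(Exception):
       """Raised when a candidate schedule cannot run on the target GPU."""
   
   def verify_gpu_limits(num_threads, shared_mem_bytes,
                         max_threads=1024, max_shared_mem=48 * 1024):
       # Typical CUDA per-block limits used as example thresholds.
       if num_threads > max_threads:
           raise InstantiationError("Skipped because of invalid gpu kernel")
       if shared_mem_bytes > max_shared_mem:
           raise InstantiationError("Skipped because of invalid gpu kernel")
   
   verify_gpu_limits(256, 16 * 1024)       # within limits: no error
   try:
       verify_gpu_limits(4096, 16 * 1024)  # too many threads: skipped
   except InstantiationError as e:
       print(e)  # Skipped because of invalid gpu kernel
   ```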
   
   @jwfromm @csullivan @tqchen 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

