adobay opened a new issue #5369: [AutoTVM]Tune graph throws exception
URL: https://github.com/apache/incubator-tvm/issues/5369
 
 
   I tuned my model for an x86 CPU according to [this 
tutorial](https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_x86.html?highlight=tune_graph),
 and an exception occurred. Can someone help?
   
   This is my code, which is almost the same as in the tutorial.
   ```
   def tune_graph(graph, records, opt_sch_file, use_DP=False):
       # Apply graph tuning to the dense ops in the model.
       target_op = [relay.op.get('nn.dense')]
       Tuner = DPTuner if use_DP else PBQPTuner
       executor = Tuner(graph, shape_dict, records, target_op, target)
       executor.benchmark_layout_transform(min_exec_num=2000)
       executor.run()
       executor.write_opt_sch2record_file(opt_sch_file)
   ```
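   For context, `shape_dict` and `target` are globals defined earlier in my script. A minimal sketch of how they are set up, following the tutorial's defaults (the input name and shape here are illustrative; my actual model may differ):
   
   ```
   # Illustrative setup, mirroring the tune_relay_x86 tutorial.
   # These names and shapes are examples, not my exact model.
   target = "llvm"
   batch_size = 1
   input_name = "data"
   shape_dict = {input_name: (batch_size, 3, 224, 224)}
   ```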
   
   This is the exception.
   ```
   WARNING:autotvm:Cannot find config for target=llvm -device=tracing, 
workload=('dense_nopack.x86', ('TENSOR', (1, 93), 'float32'), ('TENSOR', (256, 
93), 'float32'), None, 'float32'). A fallback configuration is used, which may 
bring great performance regression.
   WARNING:autotvm:Cannot find config for target=llvm -device=tracing, 
workload=('dense_nopack.x86', ('TENSOR', (1, 256), 'float32'), ('TENSOR', (256, 
256), 'float32'), None, 'float32'). A fallback configuration is used, which may 
bring great performance regression.
   WARNING:autotvm:Cannot find config for target=llvm -device=tracing, 
workload=('dense_nopack.x86', ('TENSOR', (1, 372), 'float32'), ('TENSOR', (256, 
372), 'float32'), None, 'float32'). A fallback configuration is used, which may 
bring great performance regression.
   WARNING:autotvm:Cannot find config for target=llvm -device=tracing, 
workload=('dense_nopack.x86', ('TENSOR', (1, 768), 'float32'), ('TENSOR', (256, 
768), 'float32'), None, 'float32'). A fallback configuration is used, which may 
bring great performance regression.
   WARNING:autotvm:Cannot find config for target=llvm -device=tracing, 
workload=('dense_nopack.x86', ('TENSOR', (6, 352), 'float32'), ('TENSOR', (256, 
352), 'float32'), None, 'float32'). A fallback configuration is used, which may 
bring great performance regression.
   WARNING:autotvm:Cannot find config for target=llvm -device=tracing, 
workload=('dense_nopack.x86', ('TENSOR', (6, 256), 'float32'), ('TENSOR', (256, 
256), 'float32'), None, 'float32'). A fallback configuration is used, which may 
bring great performance regression.
   WARNING:autotvm:Cannot find config for target=llvm -device=tracing, 
workload=('dense_nopack.x86', ('TENSOR', (6, 256), 'float32'), ('TENSOR', (3, 
256), 'float32'), None, 'float32'). A fallback configuration is used, which may 
bring great performance regression.
   Traceback (most recent call last):
     File 
"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/IPython/core/interactiveshell.py",
 line 3331, in run_code
       exec(code_obj, self.user_global_ns, self.user_ns)
     File "<ipython-input-2-56733eed6075>", line 1, in <module>
       
runfile('/Users/adobay/Documents/code/banma_algorithm_deep_learning/auto_tvm.py',
 wdir='/Users/adobay/Documents/code/banma_algorithm_deep_learning')
     File 
"/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_umd.py", 
line 197, in runfile
       pydev_imports.execfile(filename, global_vars, local_vars)  # execute the 
script
     File 
"/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py",
 line 18, in execfile
       exec(compile(contents+"\n", file, 'exec'), glob, loc)
     File 
"/Users/adobay/Documents/code/banma_algorithm_deep_learning/auto_tvm.py", line 
232, in <module>
       tune_and_evaluate(tuning_option, params)
     File 
"/Users/adobay/Documents/code/banma_algorithm_deep_learning/auto_tvm.py", line 
154, in tune_and_evaluate
       tune_graph(mod["main"], log_file, graph_opt_sch_file)
     File 
"/Users/adobay/Documents/code/banma_algorithm_deep_learning/auto_tvm.py", line 
103, in tune_graph
       executor = Tuner(graph, shape_dict, records, target_op, target)
     File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/autotvm/graph_tuner/pbqp_tuner.py",
 line 37, in __init__
       super(PBQPTuner, self).__init__(*args, **kwargs)
     File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/autotvm/graph_tuner/base_graph_tuner.py",
 line 156, in __init__
       self._fetch_cfg()
     File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/autotvm/graph_tuner/base_graph_tuner.py",
 line 215, in _fetch_cfg
       infer_layout_func = get_infer_layout(node_entry["topi_op"][0])
     File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/autotvm/graph_tuner/base_graph_tuner.py",
 line 44, in get_infer_layout
       raise ValueError("Cannot find infer layout for task %s" % task_name)
   ValueError: Cannot find infer layout for task dense_nopack.x86
   ```
