Thank you for the help. Here's what I've learned and the issues I've found.

* This is what I tried inside my custom tuner:

        # assuming: from tvm import autotvm
        #           from tvm.autotvm.task.space import FallbackConfigEntity
        cfg = FallbackConfigEntity()
        func_name = self.task.name          # e.g. 'conv2d_nchw_winograd.cuda'
        ref_log = autotvm.tophub.load_reference_log(target.target_name,
                                                    '1080ti', func_name)
        print(cfg)                          # prints 'None'

* The automatic device model selection at line 229 (link below) did not work for my 
case: when it falls back to an alternative model, the current code does not check 
whether that model's records actually contain data for the workload, so it can pick 
the wrong model. For now, I pick a valid model manually (a rough sketch of how I do 
that follows the link).

https://github.com/apache/incubator-tvm/blob/master/python/tvm/autotvm/tophub.py#L229
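
For reference, this is roughly how I check which device models in the downloaded 
tophub package actually contain records for my workload, so I can pick one by hand. 
It is only a sketch, not a proposed fix for tophub.py: the package file name/version 
and the use of `inp.target.model` / `inp.task.name` are assumptions based on my local 
tophub cache.

    # Sketch: list the device models in a downloaded tophub package that actually
    # have records for a given workload name, so a valid model can be picked by hand.
    # Assumes the package file (e.g. ~/.tvm/tophub/cuda_vX.XX.log) is already downloaded.
    import os
    from collections import defaultdict

    from tvm import autotvm
    from tvm.autotvm.tophub import AUTOTVM_TOPHUB_ROOT_PATH, PACKAGE_VERSION

    def models_with_workload(backend, workload_name):
        path = os.path.join(AUTOTVM_TOPHUB_ROOT_PATH,
                            "%s_%s.log" % (backend, PACKAGE_VERSION[backend]))
        counts = defaultdict(int)
        for inp, _ in autotvm.record.load_from_file(path):
            if inp.task.name == workload_name:
                counts[inp.target.model] += 1   # assumption: model is exposed on the record's target
        return dict(counts)                     # e.g. {'1080ti': 12, 'titanx': 10}

    print(models_with_workload('cuda', 'conv2d_nchw_winograd.cuda'))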

* Is there any reason for *FallbackConfigEntity* to inherit from *ConfigSpace* rather 
than *ConfigEntity*? In my code, *cfg.fallback_with_reference_log* updates *cfg* with 
the reference configuration, but since the result is not a *ConfigEntity* object, the 
tuner cannot handle it. For example, the *index* field does not exist on 
*FallbackConfigEntity*, even though it loads the best knobs from the reference log. 
When I pass this object to the tuner, it fails at runtime with error_no=4. (My 
current workaround sketch follows the link below.)

https://github.com/apache/incubator-tvm/blob/master/python/tvm/autotvm/task/space.py#L954
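
As a temporary workaround on my side, I bypass *FallbackConfigEntity* and pull the 
*ConfigEntity* directly out of the reference-log records, so the tuner gets an object 
with a valid *index*. This is just a sketch: it assumes the log contains an exact 
match for the current task's workload (and that its config space matches), and that 
decoded records carry the full workload tuple. If there is no exact match, the 
similarity matching done by *fallback_with_reference_log* would still be needed.

    # Sketch: pick the best ConfigEntity for this exact workload from the reference
    # log returned by load_reference_log, instead of using FallbackConfigEntity
    # (which has no `index` the tuner can use).
    def best_config_from_reference(ref_log, task):
        best_cfg, best_cost = None, float('inf')
        for inp, res in ref_log:                # (MeasureInput, MeasureResult) pairs
            # assumption: inp.task.workload is populated for decoded records
            if inp.task.workload != task.workload or res.error_no != 0:
                continue
            cost = sum(res.costs) / len(res.costs)
            if cost < best_cost:
                best_cfg, best_cost = inp.config, cost
        return best_cfg                         # a real ConfigEntity, or None if no exact match

If this returns None, I fall back to *cfg.fallback_with_reference_log(ref_log)* as 
before.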




