vfdff opened a new issue, #17984: URL: https://github.com/apache/tvm/issues/17984
### Expected behavior

Get the baseline performance for the **TVM Ansor schedule** with **`USE_TVM_BASE=1 pytest -s test_ansor.py::test_train_dynT`**, following the [DietCode](https://github.com/UofT-EcoSystem/DietCode/blob/MLSys2022_AE/README.md) instructions. The baseline TVM (v0.7) is built with **`set(USE_LLVM OFF)`** and **`set(USE_CUDA "/usr/local/cuda")`**.

### Actual behavior

> (tvm0.18_py310_zyd_Dietcode) root@j00595921debug2-cc95c9977-q752v:/home/zhongyunde/source/DietCode/ops/dense# USE_TVM_BASE=1 pytest -s test_ansor.py::test_train_dynT

```
        args : list
            The positional arguments to the function call.
        """
        temp_args = []
        values, tcodes, num_args = _make_tvm_args(args, temp_args)
        ret_val = TVMValue()
        ret_tcode = ctypes.c_int()
        if (
            _LIB.TVMFuncCall(
                self.handle,
                values,
                tcodes,
                ctypes.c_int(num_args),
                ctypes.byref(ret_val),
                ctypes.byref(ret_tcode),
            )
            != 0
        ):
>           raise get_last_ffi_error()
E           tvm._ffi.base.TVMError: Traceback (most recent call last):
E             7: TVMFuncCall
E             6: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::runtime::ObjectRef (tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)>::AssignTypedLambda<tvm::auto_scheduler::__mk_TVM3::{lambda(tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)#1}>(tvm::auto_scheduler::__mk_TVM3::{lambda(tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)
E             5: tvm::auto_scheduler::AutoSchedule(tvm::auto_scheduler::SearchPolicy, tvm::auto_scheduler::TuningOptions)
E             4: tvm::auto_scheduler::SketchPolicyNode::Search(int, int, int, tvm::auto_scheduler::ProgramMeasurer)
E             3: tvm::auto_scheduler::SketchPolicyNode::SearchOneRound(int, tvm::runtime::Array<tvm::auto_scheduler::State, void>*)
E             2: tvm::auto_scheduler::SketchPolicyNode::GenerateSketches()
E             1: tvm::auto_scheduler::RuleCrossThreadReduction::MeetCondition(tvm::auto_scheduler::SketchPolicyNode const&, tvm::auto_scheduler::State const&, int) const
E             0: tvm::runtime::Optional<tvm::runtime::Array<tvm::tir::DynShapeVar, void> >::value() const
E           File "/home/zhongyunde/source/DietCode/tvm/include/tvm/runtime/container/optional.h", line 93
E           TVMError:
E           ---------------------------------------------------------------
E           An error occurred during the execution of TVM.
E           For more information, please see: https://tvm.apache.org/docs/errors.html
E           ---------------------------------------------------------------
E           Check failed: (data_ != nullptr) is false:

../../tvm/python/tvm/_ffi/_ctypes/packed_func.py:237: TVMError
```
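For context on the failing check: the innermost frame is `tvm::runtime::Optional<...>::value()` aborting because the optional is empty when `RuleCrossThreadReduction::MeetCondition` reads it. The snippet below is only an illustrative sketch, not code from DietCode or this test, and it substitutes `ObjectRef` for the fork-specific `tvm::tir::DynShapeVar`; it shows how calling `value()` on an unset `Optional` produces the same `Check failed: (data_ != nullptr)` abort.

```cpp
// Illustrative sketch only (not taken from DietCode): calling value() on an
// unset tvm::runtime::Optional trips the same CHECK reported in the trace above.
#include <tvm/runtime/container/array.h>
#include <tvm/runtime/container/optional.h>
#include <tvm/runtime/object.h>

int main() {
  // A default-constructed Optional holds nullptr, analogous to the unset
  // dynamic-shape attribute that MeetCondition appears to read in this report.
  tvm::runtime::Optional<tvm::runtime::Array<tvm::runtime::ObjectRef>> opt;
  auto arr = opt.value();  // aborts: "Check failed: (data_ != nullptr) is false"
  (void)arr;
  return 0;
}
```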
### Environment

> Ubuntu 20.04.1
> tvm version: 0.8.dev0, built from source (`pip show tvm`)

### Steps to reproduce

```
1. Build the baseline tvm 0.8.dev0
2. export TVM_HOME=/home/zhongyunde/source/DietCode/tvm_base/
3. USE_TVM_BASE=1 pytest -s test_ansor.py::test_train_dynT
```

### Triage

Please refer to the list of label tags [here](https://github.com/apache/tvm/wiki/Issue-Triage-Labels) to find the relevant tags and add them below in a bullet format (example below).

* needs-triage
