Hecmay opened a new issue #6634:
URL: https://github.com/apache/incubator-tvm/issues/6634


   ## Environment 
   * Ubuntu 18.04.5 LTS
   * CUDA Version: 11.0. 
   * GPU: RTX 2080 Ti. Compute Capability: 7.5
   
   ## Description
   
   I created a simple MatMul tensor expression in C++ and want to use Ansor to 
search for an optimal schedule for it. Here is the skeleton of the program I am 
using; `args` is an array holding the input and output tensors of the MatMul op.
   ```c++
     // Create DAG and search task
     const auto& dag = tvm::auto_scheduler::ComputeDAG(args);
     auto task = SearchTask(dag, "test", Target("cuda"), Target("llvm"),
                            Optional<HardwareParams>());

     // Create tuning options and search policy
     auto options = TuningOptions(num_measure_trials, early_stopping,
                                  num_measures_per_round, verbose, builder, runner,
                                  Optional<Array<MeasureCallback>>());
     auto policy = SketchPolicy(task, cost_model, params, seed, verbose, callbacks);

     // Launch Ansor
     std::pair<te::Schedule, Array<te::Tensor>> res = AutoSchedule(policy, options);
   ```
   
   When running the program, `measure.py` complains that the GPU architecture 
cannot be detected. The error message is as follows. I tried the solutions 
suggested in this thread: 
https://discuss.tvm.apache.org/t/solved-compile-error-related-to-autotvm/804/11. 
But none of them worked in my case.
   
   
   ```shell
   ------------------------------------------------------------
   -------------------------  [ Search ]
   ------------------------------------------------------------
   Generate Sketches               #s: 1
   Sample Initial Population       #s: 744 fail_ct: 3352   Time elapsed: 0.39
   GA Iter: 0      Max score: 0.0000       Min score: 0.0000       #Pop: 744       #M+: 0  #M-: 0
   GA Iter: 5      Max score: 0.0000       Min score: 0.0000       #Pop: 2048      #M+: 1446       #M-: 89
   GA Iter: 10     Max score: 0.0000       Min score: 0.0000       #Pop: 2048      #M+: 1573       #M-: 94
   EvolutionarySearch              #s: 128 Time elapsed: 3.50
   ------------------------------------------------------------
   -------------------------  [ Measure ]
   ------------------------------------------------------------
   Get 10 programs for measure. (This may take a while)
   .E.E.E.E.E.E.E.E.E.E
   ==================================================
   No: 1   GFLOPS: 0.00 / 0.00     results: MeasureResult(error_type:CompileHostError, error_msg:Traceback (most recent call last):
     File "/home/sx/dlcb/build/tvm/python/tvm/auto_scheduler/measure.py", line 516, in timed_func
       sch, args, target=task.target, target_host=task.target_host
     File "/home/sx/dlcb/build/tvm/python/tvm/driver/bu
   ...
   et_arch)
     File "/home/sx/dlcb/build/tvm/python/tvm/contrib/nvcc.py", line 71, in compile_cuda
       raise ValueError("arch(sm_xy) is not passed, and we cannot detect it from env")
   ValueError: arch(sm_xy) is not passed, and we cannot detect it from env
   , all_cost:0.02, Tstamp:1601955085.48)
   ==================================================
   ```
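   For reference, the `sm_xy` string that `compile_cuda` in `nvcc.py` asks for is just the GPU's compute capability with the dot removed. A minimal sketch of that mapping (my own helper, written for illustration, not TVM code):
   ```python
   def compute_cap_to_arch(compute_cap: str) -> str:
       """Map a CUDA compute capability like "7.5" to an nvcc arch string like "sm_75"."""
       major, minor = compute_cap.split(".")
       return f"sm_{major}{minor}"

   print(compute_cap_to_arch("7.5"))  # sm_75, matching my RTX 2080 Ti
   ```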
   
   I was able to run the Ansor tutorial, so I believe my environment settings 
are correct. I also tried specifying the architecture string explicitly 
(`sm_75` in my case), but Ansor then throws the following error:
   ```shell
   ==================================================
   No: 4   GFLOPS: 0.00 / 0.00     results: MeasureResult(error_type:RuntimeDeviceError, error_msg:Traceback (most recent call last):
     File "/home/sx/dlcb/build/tvm/python/tvm/auto_scheduler/measure.py", line 672, in timed_func
       ndarray.empty(get_const_tuple(x.shape), x.dtype, ctx) for x in build_res.args
     File "/home/sx/dlcb/build/tvm/py
   ...
   ] (0) /home/sx/dlcb/build/tvm/build/libtvm.so(+0x1791844) [0x7fd1ecfb3844]
     File "/home/sx/dlcb/build/tvm/src/runtime/cuda/cuda_device_api.cc", line 115
   CUDA: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading: initialization error
   , all_cost:0.46, Tstamp:1601956508.32)
   ==================================================
   ```
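   For completeness, this is roughly how I pinned the architecture before tuning (assuming the `set_cuda_target_arch` helper from autotvm; the import path is from TVM around 0.7 and may differ in other versions):
   ```python
   # Arch string for an RTX 2080 Ti (compute capability 7.5).
   arch = "sm_75"

   # Hedged sketch: set_cuda_target_arch registers the arch globally so that
   # compile_cuda in nvcc.py can pick it up instead of probing the device.
   try:
       from tvm.autotvm.measure.measure_methods import set_cuda_target_arch
       set_cuda_target_arch(arch)
   except ImportError:
       pass  # TVM not installed in this environment; shown for reference only
   ```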


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
