merrymercy commented on a change in pull request #6512:
URL: https://github.com/apache/incubator-tvm/pull/6512#discussion_r491316167



##########
File path: tutorials/auto_scheduler/tune_matmul_x86.py
##########
@@ -155,19 +169,22 @@ def resume_search(task, log_file):
     sch, args = auto_scheduler.auto_schedule(task, search_policy, tuning_options=tune_option)
 
 
-# resume_search(task, "matmul.json")
+#resume_search(task, "matmul.json")
 
 ######################################################################
 # .. note::
 #   We cannot run the line above because of the conflict between
 #   python's multiprocessing and tvm's thread pool.
-#   After running a tvm generated binary (L112), the python's multiprocessing
-#   library will hang forever.
-#   You have to make sure that you don't run any tvm generated binaries before
-#   calling ansor's search. To run the L156 above, you should comment out L112-114.
+#   After running a tvm generated binary, python's multiprocessing library
+#   will hang forever. You have to make sure that you don't run any tvm
+#   generated binaries before calling auto-scheduler's search.
+#   To run the function above, you should comment out all code in the
+#   "Check correctness and evaluate performance" section.
 #
 #   You should be careful about this problem in your applications.
 #   There are other workarounds for this problem.
 #   For example, you can start a new thread/process (with the builtin python library
 #   threading or multiprocessing) and run the tvm binaries in the new thread/process.
 #   This provides an isolation and avoids the conflict in the main thread/process.
+#   You can also use :any:`auto_scheduler.measure.LocalRPCMeasureContext` for auto-scheduler,
+#   as shown in the GPU tutorial (:ref:`auto-scheduler-conv-gpu`).

Review comment:
       People are likely to run into this problem in their applications, so it is worth making them aware of it in the tutorial.
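The thread/process isolation workaround mentioned in the note can be sketched roughly as below. This is a minimal illustration of the pattern only, not TVM API usage: `run_compiled_binary` is a hypothetical placeholder for whatever code actually executes a tvm-generated binary.

```python
import multiprocessing

def run_compiled_binary(result_queue):
    # Hypothetical placeholder for code that runs a tvm-generated binary
    # (e.g. calling a built function on real inputs). Because it runs in a
    # child process, any native thread-pool state it creates is confined to
    # that child and cannot conflict with the multiprocessing used by the
    # auto-scheduler search in the parent process.
    result_queue.put("measurement done")

def measure_in_child_process():
    # Run the binary in a separate process and collect its result, keeping
    # the parent process clean for the subsequent search.
    result_queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=run_compiled_binary, args=(result_queue,))
    proc.start()
    result = result_queue.get()  # fetch the result before joining
    proc.join()
    return result
```

After `measure_in_child_process()` returns, the parent process has never executed the binary itself, so it can safely start the search afterwards.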




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
