soiferj commented on a change in pull request #4465: [AutoTVM] Tune softmax CUDA schedule
URL: https://github.com/apache/incubator-tvm/pull/4465#discussion_r356211210
##########
File path: topi/python/topi/cuda/softmax.py
##########
@@ -52,13 +60,22 @@ def schedule_softmax(outs):
raise ValueError('Tag is expected to be softmax_output or log_softmax_output. \
Got {0}'.format(op_tag))
+ # create tuning space
+ max_num_threads = tvm.target.current_target(allow_none=False).max_num_threads
+ possible_num_thread = get_powers_of_two_in_range(32, max_num_threads)
+ cfg.define_knob("num_thread", possible_num_thread)
Review comment:
That's an interesting point. On one hand, we want to auto-tune as many
areas as possible to really get the best configuration; on the other hand, we
don't want the tuning space to explode and make tuning take several hours.
How about, for now, I use the same num_thread for each stage? What do you
think?
```
# the same tuned "num_thread" knob sets nparts for the split in every stage
s[softmax].split(softmax.op.axis[1], nparts=num_thread)
s[max_elem].split(max_elem.op.axis[1], nparts=num_thread)
s[exp].split(exp.op.axis[1], nparts=num_thread)
s[expsum].split(expsum.op.axis[1], nparts=num_thread)
```
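For context, here is a minimal sketch (an assumption, not the exact PR code) of how the shared knob could be consumed inside `schedule_softmax`: `num_thread` is read back from the tuned config and the outer part of the split is bound to `threadIdx.x`, mirroring the binding pattern already used in `topi/python/topi/cuda/softmax.py`. Here `s`, `cfg`, and `softmax` are the schedule, config space, and output tensor from the enclosing function, and `xo`/`xi` are illustrative names.
```
# Assumed context: inside schedule_softmax, where `s` is the schedule,
# `cfg` the AutoTVM config space, and `softmax` the output tensor.
num_thread = cfg["num_thread"].val          # value of the tuned knob
thread_x = tvm.thread_axis((0, num_thread), "threadIdx.x")

# Split the inner axis with the shared nparts and bind the outer part
# (extent == num_thread) to threadIdx.x.
xo, xi = s[softmax].split(softmax.op.axis[1], nparts=num_thread)
s[softmax].bind(softmax.op.axis[0], tvm.thread_axis("blockIdx.x"))
s[softmax].bind(xo, thread_x)
```
With a single shared knob, the search space stays at len(possible_num_thread) points, so tuning time stays small.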