QueensGambit edited a comment on issue #15640: Performance regression for MXNet 
1.5.0
URL: 
https://github.com/apache/incubator-mxnet/issues/15640#issuecomment-514365409
 
 
   > Also if the cp nodes depths values are different, does it mean the time 
will be different?
   
   It's basically the other way around. Usually, the movetime is a fixed, given time budget, and `nodes` reports how many nodes have been created within the search tree in that time. This makes the engine applicable to different hardware and time controls. Higher `nodes` and `depth` values are preferable.
   `cp` is a dynamically changing evaluation value which, in theory, converges to the true value of a particular position given infinite samples (here: nodes). Nodes are reused for future board positions where possible. Therefore, even in cases where a network with a slightly lower nps predicted the same `bestmove`, the higher-nps version has a better-calibrated evaluation for possible future positions.
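The convergence behaviour described above can be illustrated with a minimal sketch (the network evaluation is replaced by a hypothetical noisy sampler; the true value of 0.3 and the noise range are made up for illustration): the running mean over more samples, i.e. more nodes, approaches the true value.

```python
import random

random.seed(42)
TRUE_VALUE = 0.3  # hypothetical true evaluation of the position

def noisy_eval():
    """Stand-in for one network evaluation: a noisy sample around the true value."""
    return TRUE_VALUE + random.uniform(-0.5, 0.5)

def cp_after(nodes):
    """Running-mean evaluation after `nodes` samples (the `cp`-style value)."""
    total = 0.0
    for _ in range(nodes):
        total += noisy_eval()
    return total / nodes

for n in (10, 100, 10000):
    print(n, round(cp_after(n), 3))
```

With more nodes the estimate stabilises, which is why a higher nps engine ends up with better-calibrated evaluations in the same movetime.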
   
   In a simplified view, the `.asnumpy()` method is called and the prediction 
results of the neural network are assigned to a newly created node.
   
https://github.com/QueensGambit/CrazyAra/blob/master/DeepCrazyhouse/src/domain/agent/player/util/net_pred_service.py#L80
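A simplified sketch of that step (not the actual CrazyAra code; `FakeNDArray` and `Node` are hypothetical stand-ins, since the real MXNet `NDArray.asnumpy()` blocks until the asynchronous computation finishes before copying to host):

```python
import numpy as np

class FakeNDArray:
    """Minimal stand-in for an MXNet NDArray; the real .asnumpy()
    waits for the asynchronous computation, then copies to host memory."""
    def __init__(self, data):
        self._data = np.asarray(data)

    def asnumpy(self):
        return self._data.copy()

class Node:
    """Hypothetical search-tree node holding the network's outputs."""
    def __init__(self, value, policy):
        self.value = value    # scalar position evaluation
        self.policy = policy  # move-probability vector

def expand_node(value_out, policy_out):
    # The .asnumpy() calls are the synchronisation point: the speed
    # measurement covers everything up to here.
    value = value_out.asnumpy().item()
    policy = policy_out.asnumpy()
    return Node(value, policy)

node = expand_node(FakeNDArray([0.12]), FakeNDArray([0.7, 0.2, 0.1]))
print(node.value, node.policy)
```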
   
   Thus this speed measurement should be fairly independent of the profiler update changes.
   Engines with a higher nps can explore the position more deeply in the same time and consequently achieve a higher playing strength.
   
   > In addition, if you are using mkl version, you can try these env var 
mentioned here:#15429 (comment)
   
   Thank you, I haven't tried tuning all MKL settings so far; I only activated `MXNET_SUBGRAPH_BACKEND=MKLDNN`, which gives a speed-up of ~16% for this particular model.
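For reference, `MXNET_SUBGRAPH_BACKEND` has to be set before MXNet is imported. A minimal sketch (the `measure_nps` helper and the dummy predictor are hypothetical; a real comparison would call the actual network with and without the backend enabled):

```python
import os
import time

# Select the MKL-DNN subgraph backend; must happen before `import mxnet`
# so operator fusion is applied when the symbol is loaded.
os.environ["MXNET_SUBGRAPH_BACKEND"] = "MKLDNN"

def measure_nps(predict, batches, batch_size):
    """Hypothetical helper: time `predict` over dummy batches and
    report evaluated positions per second (the engine's nps)."""
    start = time.perf_counter()
    for _ in range(batches):
        predict(batch_size)
    elapsed = time.perf_counter() - start
    return batches * batch_size / elapsed

# Dummy predictor standing in for the network call.
nps = measure_nps(lambda b: sum(range(1000)), batches=50, batch_size=8)
print(f"{nps:.0f} positions/s")
```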
   
   I'm also planning to support low-precision inference (`float16`, `int8`) and `TensorRT` in the future.
   A C++ version of the engine will also be released soon.
   There, the tree traversal and tree management on the CPU are significantly faster, so the inference time of the neural network becomes a much bigger factor.
