cbalint13 commented on PR #14468:
URL: https://github.com/apache/tvm/pull/14468#issuecomment-1496426370

   @tqchen 
   
   > Given that we are doing cost model. I am not sure if binarization is the 
best approach here. 
   
   * This was suggested in a short discussion here: 
https://github.com/dmlc/xgboost/pull/9007#issuecomment-1494739265
   * Looking through the [changes](https://github.com/dmlc/xgboost/pull/8931), the 
old behaviour also clamped the values in some way (it is not clear to me 
whether to purely binary values).
   
   > Can you dump out the labels and check the current assigned behavior?
   
   * Sure, attached is a small script + dmatrix dump: 
[tvm-xgboost-dmatrix.zip](https://github.com/apache/tvm/files/11151803/tvm-xgboost-dmatrix.zip)
 with [results.txt](https://github.com/apache/tvm/files/11151857/results.txt)
   
   * This was captured from a real TVM auto-tuning run targeting an RK3399 
OpenCL device.
   
   > 
   > Likely we might want to move away from the MAP metric, and use other 
metric instead, either regression metric or pair-wise ranking.
   
   * This proposal appears to work, and tuning still finds good kernels, but 
the real impact is hard for me to measure on my side.
   
   
   Another quick option for now is to make the binarization conditional on 
XGBoost >= 1.7.5, keeping the old behaviour for earlier versions.
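
The version gate could be sketched roughly like this. The function names and 
the exact 1.7.5 cut-off behaviour are assumptions for illustration, not the 
actual TVM patch:

```python
# Sketch: binarize ranking labels only when the installed XGBoost is new
# enough to enforce the stricter label check (hypothetical cut-off 1.7.5);
# older versions keep the previous raw-score labels. All names here are
# illustrative, not TVM's real helpers.

def parse_version(v):
    # minimal "major.minor.patch" parser; ignores pre-release suffixes
    return tuple(int(p) for p in v.split(".")[:3])

def prepare_labels(labels, xgb_version):
    if parse_version(xgb_version) >= (1, 7, 5):
        # new behaviour: clamp scores to {0, 1} for the ranking objective
        return [1.0 if y > 0 else 0.0 for y in labels]
    # old behaviour: pass raw throughput scores through unchanged
    return list(labels)

print(prepare_labels([0.0, 0.3, 1.2], "1.7.5"))  # -> [0.0, 1.0, 1.0]
print(prepare_labels([0.0, 0.3, 1.2], "1.6.0"))  # -> [0.0, 0.3, 1.2]
```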
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
