cbalint13 commented on PR #14468:
URL: https://github.com/apache/tvm/pull/14468#issuecomment-1496600164

   @tqchen ,
   
   > I think in this case we should change ranking loss to regression loss, use 
logistic regression so the values can still be used. binarization causes too 
much info loss
   
   Updates:
   
   1. The autoschedule (ansor) is not affected at all.
   2. The autotune ```reg (reg:linear)``` loss_type is not affected.
   3. Only autotune with the ```rank (rank:pairwise)``` loss_type is affected.
   4. Only xgboost > 2.0.0-dev is affected (going by the version it reports via the Python API).
   
   I updated this PR code to apply binarization **only when both checks pass**:
     - xgboost > 2
     - loss_type is ```rank:pairwise```
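   A minimal sketch of that gating logic (the function name and signature here are illustrative, not the exact identifiers used in the PR; it assumes the version string looks like ```1.7.5``` or ```2.0.0-dev```):
   
   ```python
   def needs_binarization(xgb_version: str, loss_type: str) -> bool:
       """Binarize labels only when both conditions hold:
       xgboost major version >= 2 (covers dev builds such as '2.0.0-dev')
       AND the 'rank:pairwise' objective is selected."""
       major = int(xgb_version.split(".")[0])
       return major >= 2 and loss_type == "rank:pairwise"
   ```
   
   With this guard, ```reg:linear``` users and anyone on xgboost 1.x see no behavior change.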
   
   I updated the code, the title, and the first comment here (struck out any 
erroneous info).
   I could imagine keeping this in for **autotune** in preparation for the 
upcoming xgb > 2.0.0.
   
   

