zxybazh commented on PR #13354: URL: https://github.com/apache/tvm/pull/13354#issuecomment-1310960641
Yeah, from @mkatanbaf I heard it's a really small workload with only 64 flops, so at 100 GFLOPS of computing speed it would run in about 0.00064 microseconds, which could be too small to capture during RPC benchmarking. The tuning worked, but the number is so small that it triggered an error in the XGBoost model. I think it's simply unnecessary to count such outliers, and removing them from the cost model's training data would be good enough. We can always focus on the time-consuming tasks and optimize those.
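For reference, the back-of-envelope arithmetic behind that number can be sketched as below (the 100 GFLOPS figure is the assumed throughput from the discussion, not a measured value):

```python
# Estimate the runtime of a tiny 64-flop workload on hardware
# assumed to sustain 100 GFLOPS (1e11 flops per second).
flops = 64
throughput_flops_per_s = 100e9  # 100 GFLOPS

runtime_s = flops / throughput_flops_per_s   # seconds
runtime_us = runtime_s * 1e6                 # microseconds

print(f"{runtime_us:.5f} microseconds")  # 0.00064 microseconds, i.e. 0.64 ns
```

At 0.64 ns the workload finishes well below the resolution of typical RPC-based timing, which is why such measurements show up as near-zero outliers in the cost model's training data.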
