masahi commented on PR #13561: URL: https://github.com/apache/tvm/pull/13561#issuecomment-1339843228
The main use case of this param is tuning on a high-core-count machine shared by many users. Currently, once one user starts tuning, it occupies all CPU resources, which disrupts other users. So the goal is to cap the number of cores MS uses throughout the tuning process (evolutionary search, post-order apply, XGB training, builder / runner). Note that the `tune_tir` API also has a `num_threads` param: https://github.com/apache/tvm/blob/6780c9f87db6620409f8f58c2c2925c7bd7b6681/python/tvm/meta_schedule/tir_integration.py#L57

> I would suggest using a better name because num_threads can be confusing

Agreed, but that's what `TuneContext` calls it... I can replace `num_threads` in the high-level API with `max_workers` or something, and initialize `TuneContext` with `num_threads=max_workers`. If people think this is better I can do that; otherwise I'd keep the existing convention.
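To make the proposed rename concrete, here is a minimal sketch of the bridging idea: the user-facing entry point would accept `max_workers`, and internally forward it to `TuneContext` under its existing `num_threads` convention. The class and function bodies below are hypothetical stand-ins, not the real TVM implementation; only the names `tune_tir`, `TuneContext`, and `num_threads` come from the discussion above.

```python
# Hypothetical sketch of the rename discussed above. The real TVM
# TuneContext / tune_tir have many more parameters; this only shows
# how `max_workers` could bridge to the `num_threads` convention.

class TuneContext:
    def __init__(self, num_threads: int = 16):
        # TuneContext keeps the existing `num_threads` name internally.
        self.num_threads = num_threads


def tune_tir(mod, target, *, max_workers: int = 16):
    # The high-level API exposes the clearer `max_workers` name and
    # initializes TuneContext with num_threads=max_workers.
    return TuneContext(num_threads=max_workers)


# A user capping MS to 4 cores on a shared machine:
ctx = tune_tir(mod=None, target="llvm", max_workers=4)
print(ctx.num_threads)  # -> 4
```

This keeps the internal convention untouched while giving users a less ambiguous knob at the API boundary.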
