comaniac commented on pull request #7387: URL: https://github.com/apache/tvm/pull/7387#issuecomment-772872031
> @comaniac I see some inconsistency in the workload content. It can store:
>
> * workload key -> list of tensors (the way tasks are generated through `auto_scheduler.extract_tasks`)
> * function name -> function
>
> In the first case, the list of tensors can be serialized successfully using `SaveJSON`. In the second case, which is exactly what happens in the test, the function `matmul_auto_scheduler_test` is registered and cannot be serialized using `SaveJSON`.

Yes, this is the current design: auto_scheduler supports workload registration from either a compute function or a DAG.

> And I suspect it cannot be serialized without some processing.

The current implementation doesn't serialize the function, because the receiver must have TVM deployed, so the function is already there. We only need to look up the function in the registry by its name. This is even simpler, because the only things you need to serialize and pass around are the function name and its arguments.

> On the other hand, the measurement process assumes creating a second process and passing the task to it for measurement, so the object should be serializable. I believe everything works with CPython and functions for the same reason as with the workload key / list of tensors: the content of the task is not serialized, because the CPython process just forks, so all objects are in the same place as in the original process.
>
> I can add a workaround: verify the type of the content and apply `SaveJSON` only if it is a list of tensors; otherwise skip the call. In deserialization I can likewise check the value and call `LoadJSON` only if it is a string.

Based on my explanation above, the only thing we need to do is ignore functions in the registry.

cc @merrymercy
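The proposal above (serialize only the data entries and pass functions around by name, since the receiver can look them up in its own registry) can be sketched roughly as follows. Note this is a minimal illustration of the idea, not the actual TVM implementation: `WORKLOAD_FUNC_REGISTRY`, `register_workload`, and the helper names are all hypothetical.

```python
import json

# Hypothetical registry: values are either a registered compute function
# or plain, JSON-friendly workload data (standing in for the tensor list).
WORKLOAD_FUNC_REGISTRY = {}

def register_workload(name, value):
    """Register either a compute function or a serializable workload."""
    WORKLOAD_FUNC_REGISTRY[name] = value

def serialize_registry():
    """Serialize only non-callable entries.

    Functions are skipped on purpose: the receiver has TVM deployed, so the
    function already exists there and only its name needs to travel.
    """
    return json.dumps(
        {name: value
         for name, value in WORKLOAD_FUNC_REGISTRY.items()
         if not callable(value)}
    )

def deserialize_registry(payload):
    """Merge serialized data entries back; function entries are expected to
    already be registered on the receiver side, so nothing is done for them."""
    WORKLOAD_FUNC_REGISTRY.update(json.loads(payload))
```

For example, after registering both a function and a tensor-list-style entry, serializing would carry only the latter; the function entry is simply ignored, which matches the "ignore functions in the registry" conclusion above.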
