damccorm opened a new issue, #21450: URL: https://github.com/apache/beam/issues/21450
Loading a PyTorch model as a whole has issues with pickling; investigate this by running some experiments. If the model is too large, the current RunInference implementation for PyTorch will fail because of memory limits.

1. We can pass the model class to the `load_model` method of `PyTorchModelLoader` and load the model there. This would pickle the class rather than the model object, and the model would be instantiated on the workers (see the sketch below).

Imported from Jira [BEAM-14368](https://issues.apache.org/jira/browse/BEAM-14368). Original Jira may contain additional context.

Reported by: Anand Inguva.

Subtask of issue #21435
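A minimal sketch of the proposed approach, assuming a hypothetical `PyTorchModelLoader` with `model_class`, `model_params`, and `state_dict_path` parameters (these names are illustrative, not Beam's actual API). Only the class reference, constructor arguments, and a weights path are pickled and shipped to workers; the model itself is built inside `load_model` on the worker:

```python
import torch


class PyTorchModelLoader:
    """Hypothetical loader illustrating the proposal: ship the model
    class (not the instantiated model) to the workers and construct
    the model inside load_model."""

    def __init__(self, model_class, model_params, state_dict_path):
        # Only these three attributes are pickled and sent to the
        # workers; the (potentially huge) model object is not.
        self._model_class = model_class
        self._model_params = model_params
        self._state_dict_path = state_dict_path

    def load_model(self) -> torch.nn.Module:
        # Runs on the worker: instantiate the model from its class and
        # restore the weights from a saved state dict.
        model = self._model_class(**self._model_params)
        state_dict = torch.load(self._state_dict_path, map_location='cpu')
        model.load_state_dict(state_dict)
        model.eval()
        return model
```

For example, `PyTorchModelLoader(model_class=MyModel, model_params={'hidden_dim': 128}, state_dict_path='model_state.pt')` would serialize only the class reference, the kwargs, and the path, so model memory is consumed only when `load_model` runs on the worker.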
