damccorm opened a new issue, #25286: URL: https://github.com/apache/beam/issues/25286
### What happened?

Right now in RunInference, when loading large models from remote locations (e.g. GCS), the request times out and we eventually kill the work item/try a new one. We should have some mechanism for loading large remote models without timing out. (A minimal sketch of the loading path in question follows the component list below.)

Note that the recommended path for large models will mostly be building a custom container, so this isn't a _huge_ deal, but that approach doesn't play well with model updates or pulling from model registries.

### Issue Priority

Priority: 3 (minor)

### Issue Components

- [X] Component: Python SDK
- [ ] Component: Java SDK
- [ ] Component: Go SDK
- [ ] Component: Typescript SDK
- [ ] Component: IO connector
- [ ] Component: Beam examples
- [ ] Component: Beam playground
- [ ] Component: Beam katas
- [ ] Component: Website
- [ ] Component: Spark Runner
- [ ] Component: Flink Runner
- [ ] Component: Samza Runner
- [ ] Component: Twister2 Runner
- [ ] Component: Hazelcast Jet Runner
- [X] Component: Google Cloud Dataflow Runner
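For context, here is a minimal sketch of the kind of loading path this issue is about: a custom `ModelHandler` whose `load_model()` stages a large artifact from a remote filesystem (such as GCS) onto local disk before deserializing it. The `StagedRemoteModelHandler` name, the pickle format, the example URI, and the chunk size are illustrative assumptions, not Beam internals; the point is that the chunked copy inside `load_model()` is the step that can outlive the work-item deadline described above.

```python
import pickle

from apache_beam.io.filesystems import FileSystems
from apache_beam.ml.inference.base import ModelHandler, PredictionResult


class StagedRemoteModelHandler(ModelHandler):
  """Hypothetical handler that stages a remote model locally, then loads it."""

  def __init__(self, model_uri):
    # e.g. 'gs://my-bucket/model.pkl' (illustrative path)
    self._model_uri = model_uri

  def load_model(self):
    # Chunked copy from the remote filesystem to local disk. For a
    # multi-GB model, this read is the slow step that can exceed the
    # work-item deadline on Dataflow.
    local_path = '/tmp/staged_model.pkl'
    with FileSystems.open(self._model_uri) as remote, \
        open(local_path, 'wb') as local:
      for chunk in iter(lambda: remote.read(1 << 20), b''):
        local.write(chunk)
    with open(local_path, 'rb') as f:
      return pickle.load(f)

  def run_inference(self, batch, model, inference_args=None):
    # Trivial stand-in: apply the deserialized model to each example.
    return [PredictionResult(x, model(x)) for x in batch]
```

Used as `RunInference(StagedRemoteModelHandler('gs://my-bucket/model.pkl'))`, nothing in this handler can signal to the runner that a long download is expected, which is the mechanism this issue asks for.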
