lostluck opened a new issue, #23335: URL: https://github.com/apache/beam/issues/23335
### What needs to happen?

At present the Go SDK harness responds to FnAPI Control requests in order of receipt, other than ProcessBundleRequests, which are processed on their own goroutines. This may be causing contention on the control gRPC receive goroutine, since all other messages are handled sequentially on that same goroutine, which also handles "keepalive" responses and other maintenance messages on the gRPC connection.

https://github.com/apache/beam/blob/master/sdks/go/pkg/beam/core/runtime/harness/harness.go#L168

Per the Bundle Processing spec (link TBD), response ordering only needs to be strict per bundle instruction, e.g. requests to split, get progress, etc.

A simple solution A would be to move response processing off the network goroutine into a separate processing goroutine, allowing the network goroutine to drain. This avoids contention on the network goroutine, up to some buffered-channel limit.

A more elaborate solution B would be to spawn an additional "side car" goroutine for each in-flight bundle to handle these ordered messages, similarly unblocking the network goroutine while still processing the different side messages in order. Each bundle would get its own channel, buffered to some reasonable size. The concern with B is that in "high concurrent bundle" cases the per-bundle goroutines and channels may add unnecessary memory overhead, but this is speculation and would need to be validated through profiling.

In either case, the same single "response" channel would be used to serialize responses back to the runner, since sending a reply should remain a low-effort operation (subject to profiling). A rough sketch of solution A follows at the end of this issue.

cc: @lukecwik @lostluck

### Issue Priority

Priority: 2

### Issue Component

Component: sdk-go
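As a minimal illustration of solution A only (not the harness's actual code), the sketch below decouples control-response handling from the gRPC receive loop via a buffered channel drained by a dedicated worker goroutine. All identifiers here (`instructionRequest`, `handleInstruction`, the channel sizes) are hypothetical stand-ins, and the "network" loop is simulated; the real change would live in `harness.go` around the linked line.

```go
// Sketch of solution A: the receive loop only enqueues control work; a
// separate goroutine executes handlers, so the network goroutine is never
// blocked by response processing.
package main

import (
	"context"
	"fmt"
)

// instructionRequest/instructionResponse are illustrative placeholders for
// the FnAPI InstructionRequest/InstructionResponse protos.
type instructionRequest struct {
	id      string
	payload string
}

type instructionResponse struct {
	id     string
	result string
}

// handleInstruction stands in for the non-ProcessBundle control handling
// (splits, progress, monitoring, etc.).
func handleInstruction(req instructionRequest) instructionResponse {
	return instructionResponse{id: req.id, result: "ok:" + req.payload}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Buffered to some reasonable limit so the receive loop only blocks
	// once the processing worker falls far behind.
	work := make(chan instructionRequest, 100)
	responses := make(chan instructionResponse, 100)

	// Dedicated processing goroutine: handlers run here, not inline in the
	// gRPC receive loop.
	go func() {
		for req := range work {
			responses <- handleInstruction(req)
		}
		close(responses)
	}()

	// Simulated network receive loop: enqueue and return to Recv() quickly.
	go func() {
		defer close(work)
		for i := 0; i < 3; i++ {
			select {
			case work <- instructionRequest{id: fmt.Sprint(i), payload: "split"}:
			case <-ctx.Done():
				return
			}
		}
	}()

	// Single response sender, matching the existing single "response"
	// channel that serializes replies to the runner.
	for resp := range responses {
		fmt.Println("send response", resp.id, resp.result)
	}
}
```

Solution B would replace the single processing goroutine with one goroutine and buffered channel per in-flight bundle, keyed by instruction reference, so ordering is preserved per bundle rather than globally; the trade-off is the per-bundle memory noted above.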
