parveensania commented on code in PR #37840:
URL: https://github.com/apache/beam/pull/37840#discussion_r2979138155
##########
runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingDataflowWorker.java:
##########
@@ -804,20 +809,37 @@ private static void validateWorkerOptions(DataflowWorkerHarnessOptions options)
   }
 
   private static ChannelCache createChannelCache(
-      DataflowWorkerHarnessOptions workerOptions, ComputationConfig.Fetcher configFetcher) {
+      DataflowWorkerHarnessOptions workerOptions,
+      ComputationConfig.Fetcher configFetcher,
+      GrpcDispatcherClient dispatcherClient) {
     ChannelCache channelCache =
         ChannelCache.create(
             (currentFlowControlSettings, serviceAddress) -> {
-              // IsolationChannel will create and manage separate RPC channels to the same
-              // serviceAddress.
-              return IsolationChannel.create(
-                  () ->
-                      remoteChannel(
-                          serviceAddress,
-                          workerOptions.getWindmillServiceRpcChannelAliveTimeoutSec(),
-                          currentFlowControlSettings),
-                  currentFlowControlSettings.getOnReadyThresholdBytes());
+              ManagedChannel primaryChannel =
+                  IsolationChannel.create(
Review Comment:
Addressed this. IsolationChannel now wraps FailoverChannel, which creates two
channels per active RPC.
The original intent was to keep IsolationChannel unmodified (since it is also
used by the dispatcher client) and to handle fallback at the per-worker level.
The new ordering (IsolationChannel over FailoverChannel) changes the semantics
to per-RPC failover: in case of connectivity issues, each RPC independently
discovers the failure and switches at a different time, rather than all RPCs
switching together in a coordinated way.
I agree that managing state at the per-RPC level seems less error-prone, but I
would like to call out this semantic change.
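
For illustration, a rough sketch of the two orderings discussed above. The
FailoverChannel.create(primary, fallback) factory and the fallbackAddress
variable are assumptions made only to show the composition order; they may not
match the actual API introduced in this PR.

```java
// Per-RPC failover (current PR ordering): IsolationChannel sits on top, so each
// RPC it manages gets its own FailoverChannel and discovers/handles a
// connectivity failure on its own.
// NOTE: FailoverChannel.create(...) and fallbackAddress are illustrative only.
ManagedChannel perRpcFailover =
    IsolationChannel.create(
        () ->
            FailoverChannel.create(
                () ->
                    remoteChannel(
                        serviceAddress,
                        workerOptions.getWindmillServiceRpcChannelAliveTimeoutSec(),
                        currentFlowControlSettings),
                () ->
                    remoteChannel(
                        fallbackAddress,
                        workerOptions.getWindmillServiceRpcChannelAliveTimeoutSec(),
                        currentFlowControlSettings)),
        currentFlowControlSettings.getOnReadyThresholdBytes());

// Per-worker failover (original intent): IsolationChannel stays unmodified and
// the primary/fallback switch happens once, below it, so all RPCs to the worker
// move together in a coordinated way.
ManagedChannel perWorkerFailover =
    FailoverChannel.create(
        () ->
            IsolationChannel.create(
                () ->
                    remoteChannel(
                        serviceAddress,
                        workerOptions.getWindmillServiceRpcChannelAliveTimeoutSec(),
                        currentFlowControlSettings),
                currentFlowControlSettings.getOnReadyThresholdBytes()),
        () ->
            IsolationChannel.create(
                () ->
                    remoteChannel(
                        fallbackAddress,
                        workerOptions.getWindmillServiceRpcChannelAliveTimeoutSec(),
                        currentFlowControlSettings),
                currentFlowControlSettings.getOnReadyThresholdBytes()));
```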
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]