[
https://issues.apache.org/jira/browse/BEAM-8816?focusedWorklogId=356083&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-356083
]
ASF GitHub Bot logged work on BEAM-8816:
----------------------------------------
Author: ASF GitHub Bot
Created on: 09/Dec/19 11:15
Start Date: 09/Dec/19 11:15
Worklog Time Spent: 10m
Work Description: mxm commented on pull request #10313: [BEAM-8816]
Option to load balance bundle processing w/ multiple SDK workers
URL: https://github.com/apache/beam/pull/10313#discussion_r355390802
##########
File path:
runners/java-fn-execution/src/main/java/org/apache/beam/runners/fnexecution/control/DefaultJobBundleFactory.java
##########
@@ -276,38 +341,53 @@ public RemoteBundle getBundle(
       throws Exception {
     // TODO: Consider having BundleProcessor#newBundle take in an OutputReceiverFactory rather
     // than constructing the receiver map here. Every bundle factory will need this.
-    ImmutableMap.Builder<String, RemoteOutputReceiver<?>> outputReceivers =
-        ImmutableMap.builder();
-    for (Map.Entry<String, Coder> remoteOutputCoder :
-        processBundleDescriptor.getRemoteOutputCoders().entrySet()) {
-      String outputTransform = remoteOutputCoder.getKey();
-      Coder coder = remoteOutputCoder.getValue();
-      String bundleOutputPCollection =
-          Iterables.getOnlyElement(
-              processBundleDescriptor
-                  .getProcessBundleDescriptor()
-                  .getTransformsOrThrow(outputTransform)
-                  .getInputsMap()
-                  .values());
-      FnDataReceiver outputReceiver = outputReceiverFactory.create(bundleOutputPCollection);
-      outputReceivers.put(outputTransform, RemoteOutputReceiver.of(coder, outputReceiver));
-    }
-    if (environmentExpirationMillis == 0) {
-      return processor.newBundle(outputReceivers.build(), stateRequestHandler, progressHandler);
+    if (environmentExpirationMillis == 0 && !loadBalanceBundles) {
+      return currentClient.processor.newBundle(
+          getOutputReceivers(currentClient.processBundleDescriptor, outputReceiverFactory)
+              .build(),
+          stateRequestHandler,
+          progressHandler);
     }
-    final WrappedSdkHarnessClient client =
-        environmentCaches.get(environmentIndex).getUnchecked(executableStage.getEnvironment());
-    client.ref();
+    final LoadingCache<Environment, WrappedSdkHarnessClient> currentCache;
+    if (loadBalanceBundles) {
Review comment:
Just an idea: instead of conditionally switching here, could we add an
interface to handle the client selection? We could have two implementations,
one regular and one for load balancing. That would make the code easier to
read here (the necessary data structures would move into the implementations)
and also avoid runtime checks.
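A minimal sketch of the strategy interface the comment suggests. All names here (`ClientSelector`, the `Client` stand-in for `WrappedSdkHarnessClient`, the round-robin policy) are illustrative assumptions, not Beam's actual API; the point is only that `getBundle` could call `selector.select()` without branching on `loadBalanceBundles`:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ClientSelectorDemo {
  /** Toy stand-in for Beam's WrappedSdkHarnessClient. */
  static final class Client {
    final String id;
    Client(String id) { this.id = id; }
  }

  /** Strategy interface: each implementation owns its own data structures. */
  interface ClientSelector {
    Client select();
  }

  /** Regular behavior: always return the single pinned client. */
  static final class PinnedSelector implements ClientSelector {
    private final Client client;
    PinnedSelector(Client client) { this.client = client; }
    @Override public Client select() { return client; }
  }

  /** Load-balancing behavior (round-robin chosen here purely for illustration). */
  static final class RoundRobinSelector implements ClientSelector {
    private final List<Client> clients;
    private final AtomicInteger next = new AtomicInteger();
    RoundRobinSelector(List<Client> clients) { this.clients = clients; }
    @Override public Client select() {
      // Wrap around the client list; floorMod keeps the index non-negative.
      return clients.get(Math.floorMod(next.getAndIncrement(), clients.size()));
    }
  }

  public static void main(String[] args) {
    // A caller like getBundle() would hold a ClientSelector and never branch
    // on a load-balancing flag at bundle-creation time.
    ClientSelector selector = new RoundRobinSelector(
        List.of(new Client("w0"), new Client("w1"), new Client("w2")));
    System.out.println(selector.select().id);
    System.out.println(selector.select().id);
  }
}
```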
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 356083)
> Load balance bundle processing w/ multiple SDK workers
> ------------------------------------------------------
>
> Key: BEAM-8816
> URL: https://issues.apache.org/jira/browse/BEAM-8816
> Project: Beam
> Issue Type: Improvement
> Components: runner-core, runner-flink
> Affects Versions: 2.17.0
> Reporter: Thomas Weise
> Assignee: Thomas Weise
> Priority: Major
> Labels: portability
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> We found skewed utilization of SDK workers causing excessive latency with
> Streaming/Python/Flink. (Remember that with Python, we need to execute
> multiple worker processes on a machine instead of relying on threads in a
> single worker, which requires the runner to decide which worker to give a
> bundle to for processing.)
> The Flink runner has knobs to influence the number of records per bundle and
> the maximum duration for a bundle. But since the runner does not understand
> the cost of individual records, it is possible for the duration of bundles to
> fluctuate significantly due to skew in processing time of individual records.
> And unless the bundle size is 1, multiple expensive records could be
> allocated to a single bundle before the cutoff time is reached. We notice
> this with a pipeline that executes models, but there are other use cases
> where the cost of individual records can vary significantly.
> Additionally, the Flink runner establishes the association between the
> subtask managing an executable stage and the SDK worker during
> initialization, lasting for the duration of the job. In other words, bundles
> for the same executable stage will always be sent to the same SDK worker.
> When the execution time skew is tied to specific keys (stateful processing),
> it further aggravates the issue.
> [https://lists.apache.org/thread.html/59c02d8b8ea849c158deb39ad9d83af4d8fcb56570501c7fe8f79bb2@%3Cdev.beam.apache.org%3E]
> Long term this problem can be addressed with SDF. Until then, an (optional)
> runner-controlled balancing mechanism has been shown to improve performance
> in internal testing.
>
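As a rough illustration of the scheduling effect the issue describes (this is a toy model, not Beam's or the PR's actual implementation; the worker count and per-bundle costs are made up), compare sending every bundle to one pinned worker against sending each bundle to the currently least-loaded worker:

```java
import java.util.Arrays;

public class BundleBalancingDemo {
  /** Completion time when every bundle goes to worker 0 (fixed association). */
  static long pinned(long[] bundleCosts, int numWorkers) {
    long[] load = new long[numWorkers];
    for (long c : bundleCosts) load[0] += c;  // same worker every time
    return Arrays.stream(load).max().getAsLong();
  }

  /** Completion time when each bundle goes to the least-loaded worker. */
  static long leastLoaded(long[] bundleCosts, int numWorkers) {
    long[] load = new long[numWorkers];
    for (long c : bundleCosts) {
      int best = 0;
      for (int w = 1; w < numWorkers; w++) {
        if (load[w] < load[best]) best = w;
      }
      load[best] += c;
    }
    return Arrays.stream(load).max().getAsLong();
  }

  public static void main(String[] args) {
    // Skewed costs: a few expensive bundles mixed in with cheap ones.
    long[] costs = {100, 5, 5, 100, 5, 5, 100, 5};
    System.out.println("pinned:       " + pinned(costs, 4));       // 325
    System.out.println("least-loaded: " + leastLoaded(costs, 4));  // 110
  }
}
```

With uniform costs the two policies would perform similarly; the gap opens only when per-bundle cost varies, which matches the observation that bundle-size and bundle-duration knobs alone cannot bound skew.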
--
This message was sent by Atlassian Jira
(v8.3.4#803005)