[
https://issues.apache.org/jira/browse/BEAM-4297?focusedWorklogId=105253&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-105253
]
ASF GitHub Bot logged work on BEAM-4297:
----------------------------------------
Author: ASF GitHub Bot
Created on: 23/May/18 18:35
Start Date: 23/May/18 18:35
Worklog Time Spent: 10m
Work Description: lukecwik commented on a change in pull request #5407:
[BEAM-4297] Streaming executable stage translation and operator for portable
Flink runner.
URL: https://github.com/apache/beam/pull/5407#discussion_r190355574
##########
File path:
runners/flink/src/main/java/org/apache/beam/runners/flink/FlinkStreamingPortablePipelineTranslator.java
##########
@@ -423,8 +432,133 @@ private void translateImpulse(
String id,
RunnerApi.Pipeline pipeline,
StreamingTranslationContext context) {
+ // TODO: Fail on stateful DoFns for now.
+ // TODO: Support stateful DoFns by inserting group-by-keys where necessary.
+ // TODO: Fail on splittable DoFns.
+ // TODO: Special-case single outputs to avoid multiplexing PCollections.
+ RunnerApi.Components components = pipeline.getComponents();
+ RunnerApi.PTransform transform = components.getTransformsOrThrow(id);
+ Map<String, String> outputs = transform.getOutputsMap();
+ RehydratedComponents rehydratedComponents =
+ RehydratedComponents.forComponents(components);
+
+ BiMap<String, Integer> outputMap =
+ FlinkPipelineTranslatorUtils.createOutputMap(outputs.keySet());
+ Map<String, Coder<WindowedValue<?>>> outputCoders = Maps.newHashMap();
+ for (String localOutputName : new TreeMap<>(outputMap.inverse()).values()) {
+ String collectionId = outputs.get(localOutputName);
+ Coder<WindowedValue<?>> windowCoder = (Coder) instantiateCoder(collectionId, components);
+ outputCoders.put(localOutputName, windowCoder);
+ }
+
+ final RunnerApi.ExecutableStagePayload stagePayload;
+ try {
+ stagePayload = RunnerApi.ExecutableStagePayload.parseFrom(transform.getSpec().getPayload());
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+
+ String inputPCollectionId =
+ Iterables.getOnlyElement(transform.getInputsMap().values());
Review comment:
Unfortunately, the coders and other properties of the
ExecutableProcessBundleDescriptor are currently constructed during execution. It
would be best if we could somehow make all of these details stable during
pipeline translation so that they don't change during execution. Having Flink
rely on calling WireCoders.instantiateRunnerWireCoder(...) is an anti-pattern
for encapsulation. So if we need to construct the executable stage payload
twice, we could make the contract guarantee that it is stable regardless of how
many times it is constructed. I just want to push more of the
input/output/coder/state/side-input information up to translation time instead
of having it buried deep within execution. In my opinion, only service
(ApiServiceDescriptor) binding should happen there.
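The stability contract described above can be sketched in plain Java. All names below (StageDescriptorCache, StageDescriptor, descriptorFor) are hypothetical illustrations, not Beam APIs: the idea is simply to derive per-stage properties once, at translation time, memoize them as an immutable value, and have execution-time code read that value instead of re-deriving coders and inputs/outputs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the "stable at translation time" contract: build the
// stage descriptor at most once per transform id, so every execution-time
// lookup observes exactly the value that translation produced.
class StageDescriptorCache {

  // Immutable value holding everything known at translation time.
  static final class StageDescriptor {
    final String inputPCollectionId;
    final Map<String, String> outputCoderIds;

    StageDescriptor(String inputPCollectionId, Map<String, String> outputCoderIds) {
      this.inputPCollectionId = inputPCollectionId;
      // Defensive, unmodifiable copy: nothing can mutate it later.
      this.outputCoderIds = Map.copyOf(outputCoderIds);
    }
  }

  private final Map<String, StageDescriptor> byTransformId = new ConcurrentHashMap<>();

  // computeIfAbsent guarantees the descriptor is constructed at most once per
  // id; repeated calls return the same instance, so nothing can diverge
  // between translation and execution.
  StageDescriptor descriptorFor(
      String transformId, String inputPCollectionId, Map<String, String> outputCoderIds) {
    return byTransformId.computeIfAbsent(
        transformId, id -> new StageDescriptor(inputPCollectionId, outputCoderIds));
  }

  public static void main(String[] args) {
    StageDescriptorCache cache = new StageDescriptorCache();
    StageDescriptor first =
        cache.descriptorFor("stage-1", "pc-in", Map.of("out0", "coder-a"));
    StageDescriptor second =
        cache.descriptorFor("stage-1", "pc-in", Map.of("out0", "coder-a"));
    // Same instance both times: execution sees exactly what translation built.
    System.out.println(first == second); // prints "true"
  }
}
```

The design point is that the second construction never happens; if it must (for example, when the payload is re-parsed from bytes), determinism of the parse gives the same stability guarantee.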
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 105253)
Time Spent: 2h 20m (was: 2h 10m)
> Flink portable runner executable stage operator for streaming
> -------------------------------------------------------------
>
> Key: BEAM-4297
> URL: https://issues.apache.org/jira/browse/BEAM-4297
> Project: Beam
> Issue Type: Task
> Components: runner-flink
> Reporter: Thomas Weise
> Assignee: Thomas Weise
> Priority: Major
> Labels: portability
> Time Spent: 2h 20m
> Remaining Estimate: 0h
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)