[
https://issues.apache.org/jira/browse/BEAM-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16897895#comment-16897895
]
David Moravek commented on BEAM-7730:
-------------------------------------
Successfully tested the 1.9 runner with an internal pipeline. Everything works as
expected when I allocate only one slot per task manager. With multiple slots, the
pipeline fails while deserializing the outputs of a ParDo with side outputs.
It looks like one of the coders reads more data from the deserialization stream
than it should, and we end up with a BufferUnderflow. This is probably a bug in
Beam's KryoCoder and has nothing to do with the Flink runner (it happens in
prior versions too).
Both Flink PRs are merged.
The portable test case is the last one still failing; I will investigate further
once the issue mentioned above is resolved.
> Add Flink 1.9 build target and Make FlinkRunner compatible with Flink 1.9
> -------------------------------------------------------------------------
>
> Key: BEAM-7730
> URL: https://issues.apache.org/jira/browse/BEAM-7730
> Project: Beam
> Issue Type: New Feature
> Components: runner-flink
> Reporter: sunjincheng
> Assignee: sunjincheng
> Priority: Major
> Fix For: 2.16.0
>
>
> Apache Flink 1.9 is coming, so it would be good to add a Flink 1.9 build target
> and make the Flink Runner compatible with Flink 1.9.
> I will add a brief summary of the changes after Flink 1.9.0 is released.
> I would appreciate it if you could leave your suggestions or comments!
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)