[ https://issues.apache.org/jira/browse/BEAM-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17282669#comment-17282669 ]
Yifan Mai commented on BEAM-11787:
----------------------------------
Looks like there was a behavior change for Dataflow Python streaming in
https://github.com/apache/beam/pull/13884
Before the change, when streaming was enabled, we never called
translations.optimize_pipeline on self.proto_pipeline. After the change, when
streaming is enabled, we do call translations.optimize_pipeline on
self.proto_pipeline, but with phases=[]. Apparently this is not a no-op and it
breaks streaming somehow.
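For reference, a minimal sketch of the two code paths being contrasted. The helper function, the batch phase list, and the exact optimize_pipeline arguments below are assumptions for illustration only and are not the actual diff from the PR:
{code:python}
from apache_beam.options.pipeline_options import StandardOptions
from apache_beam.runners.portability.fn_api_runner import translations


def _maybe_optimize(proto_pipeline, options):
  # Illustrative sketch only; the real logic lives in DataflowRunner.run_pipeline.
  streaming = options.view_as(StandardOptions).streaming

  # Before the PR (as described above), streaming skipped optimization entirely:
  #   if streaming:
  #     return proto_pipeline

  # After the PR (as described above), optimize_pipeline runs for streaming too,
  # just with an empty phase list. The point above is that the call is not a
  # no-op even when phases=[].
  phases = [] if streaming else [
      translations.pack_combiners,  # batch phase list here is an assumption
      translations.sort_stages,
  ]
  return translations.optimize_pipeline(
      proto_pipeline,
      phases=phases,
      known_runner_urns=frozenset(),
      partial=True)
{code}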
> beam_PostCommit_Py_VR_Dataflow is perma red
> -------------------------------------------
>
> Key: BEAM-11787
> URL: https://issues.apache.org/jira/browse/BEAM-11787
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-core
> Reporter: Chamikara Madhusanka Jayalath
> Assignee: Yifan Mai
> Priority: P1
> Fix For: 2.28.0
>
>
> [https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow/]
> Seems like this first started failing when the following commit went in.
> [https://github.com/apache/beam/commit/5ea504de2eb187dca733f6087aea780dc781040d]
> [https://ci-beam.apache.org/job/beam_PostCommit_Py_VR_Dataflow/7531/]
>
> It's possible we have to build internal Dataflow worker containers.