[ https://issues.apache.org/jira/browse/BEAM-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17525877#comment-17525877 ]
Beam JIRA Bot commented on BEAM-12915:
--------------------------------------
This issue was marked "stale-P2" and has not received a public comment in 14
days. It is now automatically moved to P3. If you are still affected by it, you
can comment and move it back to P2.
> No parallelism when using SDFBoundedSourceReader with Flink
> -----------------------------------------------------------
>
> Key: BEAM-12915
> URL: https://issues.apache.org/jira/browse/BEAM-12915
> Project: Beam
> Issue Type: Bug
> Components: runner-flink
> Affects Versions: 2.32.0
> Reporter: Rogan Morrow
> Priority: P3
>
> Background: I am using TFX pipelines with Flink as the runner for Beam (a Flink
> session cluster deployed with
> [flink-on-k8s-operator|https://github.com/GoogleCloudPlatform/flink-on-k8s-operator]).
> The Flink cluster has 2 taskmanagers with 16 cores each, and parallelism is
> set to 32. TFX components call {{beam.io.ReadFromTFRecord}} to load data,
> passing in a glob file pattern. I have a dataset of TFRecords split across
> 160 files. When I run the component, processing for all 160 files ends up in a
> single Flink subtask, i.e. the effective parallelism is 1. See the images
> below:
> !https://i.imgur.com/ppba0AL.png!
> !https://i.imgur.com/rSTFATn.png!
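>
> For reference, a minimal sketch of the read path described above. The file
> pattern, Flink master address, and environment settings are placeholders, not
> the exact values from my deployment:
> {code:python}
> # Sketch only: placeholder paths/addresses; assumes the portable FlinkRunner.
> import apache_beam as beam
> from apache_beam.options.pipeline_options import PipelineOptions
>
> options = PipelineOptions([
>     "--runner=FlinkRunner",
>     "--flink_master=flink-jobmanager:8081",  # placeholder session-cluster address
>     "--environment_type=EXTERNAL",           # placeholder worker environment
>     "--environment_config=localhost:50000",
>     "--parallelism=32",
> ])
>
> with beam.Pipeline(options=options) as p:
>     _ = (
>         p
>         # ReadFromTFRecord expands to the SDF-based bounded source read.
>         | "ReadTFRecords" >> beam.io.ReadFromTFRecord("/data/train/tfrecord-*")
>         | "Count" >> beam.combiners.Count.Globally()
>         | "Print" >> beam.Map(print)
>     )
> {code}
> Even though the glob matches 160 files, all of the read work lands in one
> Flink subtask.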
>
> I have tried all manner of Beam/Flink options and different versions of
> Beam/Flink, but the behaviour remains the same.
> Furthermore, the behaviour affects anything that uses
> {{apache_beam.io.iobase.SDFBoundedSourceReader}}; for example,
> {{apache_beam.io.parquetio.ReadFromParquet}} exhibits the same issue. Either
> I'm missing some obscure setting in my configuration, or this is a bug in
> the Flink runner.
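> The equivalent Parquet read, again only a sketch with a placeholder path and
> abbreviated FlinkRunner options, shows the same single-subtask behaviour:
> {code:python}
> # Sketch only: placeholder path; same FlinkRunner options as the earlier sketch.
> import apache_beam as beam
> from apache_beam.io.parquetio import ReadFromParquet
> from apache_beam.options.pipeline_options import PipelineOptions
>
> options = PipelineOptions(["--runner=FlinkRunner", "--parallelism=32"])
>
> with beam.Pipeline(options=options) as p:
>     _ = (
>         p
>         # ReadFromParquet is also built on the SDF-based bounded source read.
>         | "ReadParquet" >> ReadFromParquet("/data/train/part-*.parquet")
>         | "Count" >> beam.combiners.Count.Globally()
>         | "Print" >> beam.Map(print)
>     )
> {code}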
>