Rogan Morrow created BEAM-12915:
-----------------------------------
Summary: No parallelism when using SDFBoundedSourceReader with Flink
Key: BEAM-12915
URL: https://issues.apache.org/jira/browse/BEAM-12915
Project: Beam
Issue Type: Bug
Components: runner-flink
Affects Versions: 2.32.0
Reporter: Rogan Morrow
Background: I am using TFX pipelines with Flink as the runner for Beam (flink
session cluster using
[flink-on-k8s-operator|https://github.com/GoogleCloudPlatform/flink-on-k8s-operator]).
The Flink cluster has 2 taskmanagers with 16 cores each, and parallelism is
set to 32. TFX components call {{beam.io.ReadFromTFRecord}} to load data. I
have a dataset of TFRecords split across 160 files. When I try to run the
component, processing for all 160 files ends up in a single subtask in Flink,
i.e. the parallelism is effectively 1. See the images below:
!https://i.imgur.com/ppba0AL.png!
!https://i.imgur.com/rSTFATn.png!
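For reference, a minimal pipeline along the lines of the sketch below reproduces the behaviour outside of TFX (the bucket path, JobManager address and SDK harness endpoint are placeholders, not my exact configuration):

{code:python}
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Illustrative options; the real job runs on a Flink session cluster
# deployed via flink-on-k8s-operator. Addresses and paths are placeholders.
options = PipelineOptions([
    '--runner=FlinkRunner',
    '--flink_master=flink-jobmanager:8081',   # placeholder JobManager address
    '--parallelism=32',
    '--environment_type=EXTERNAL',
    '--environment_config=localhost:50000',   # placeholder SDK harness endpoint
])

with beam.Pipeline(options=options) as p:
    _ = (
        p
        # The glob matches the 160 TFRecord shards; all of them end up being
        # processed by a single Flink subtask despite parallelism=32.
        | 'ReadTFRecords' >> beam.io.ReadFromTFRecord('gs://my-bucket/data/tfrecords-*')
        | 'Count' >> beam.combiners.Count.Globally()
    )
{code}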
I have tried all manner of Beam/Flink options and different versions of Beam/Flink, but the behaviour remains the same.
Furthermore, the behaviour affects anything that uses {{apache_beam.io.iobase.SDFBoundedSourceReader}}; e.g. {{apache_beam.io.parquetio.ReadFromParquet}} exhibits the same issue (see the snippet below). Either I'm missing some obscure setting in my configuration, or this is a bug in the Flink runner.
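For completeness, the Parquet variant I tested looks roughly like the following (again with a placeholder path); it shows the same single-subtask behaviour:

{code:python}
import apache_beam as beam
from apache_beam.io.parquetio import ReadFromParquet

# Same pipeline options as in the TFRecord example above;
# only the read transform changes.
with beam.Pipeline(options=options) as p:
    _ = p | 'ReadParquet' >> ReadFromParquet('gs://my-bucket/data/part-*.parquet')
{code}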