yuwtennis commented on issue #30663:
URL: https://github.com/apache/beam/issues/30663#issuecomment-3586000419

   I just ran into the same problem and was able to reproduce it in `2.69.0` using the _PortableRunner_ with an Apache Flink backend.
   
   It looks like the behavior of the [ExecutableStage](https://flink.apache.org/2020/02/22/apache-beam-how-beam-runs-on-top-of-flink/#how-are-beam-programs-translated-in-language-portability) differs between a _bounded_ and an _unbounded_ data source.
   
   For a _bounded_ data source, the `--environment_type` option is respected. Below is a snippet from the Flink taskmanager log.
   
   [sample 
run](https://github.com/yuwtennis/apache-beam-pipeline-apps/tree/b2d58b1aec2156f4925711593e95daac039a52c2/python#portable-runner-w-flink)
   
   ```
   2025-11-27 12:13:22,207 DEBUG 
org.apache.beam.runners.fnexecution.environment.ExternalEnvironmentFactory [] - 
Requesting worker ID 1-1
   ```
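   For reference, a minimal sketch of the portable pipeline options that should select the external (non-Docker) SDK harness. The flag names come from the Beam portability docs; the endpoint addresses here are placeholders, not the ones from my run:

   ```python
   # Argv-style options for running a Beam Python pipeline on the PortableRunner
   # with an external (pre-started) SDK worker pool instead of Docker.
   # Endpoint addresses are placeholders.
   portable_args = [
       "--runner=PortableRunner",
       "--job_endpoint=localhost:8099",          # Flink job service (placeholder)
       "--environment_type=EXTERNAL",            # should suppress Docker workers
       "--environment_config=localhost:50000",   # external worker pool (placeholder)
   ]

   def get_flag(args, name):
       """Return the value of a --name=value flag from an argv-style list."""
       prefix = f"--{name}="
       for arg in args:
           if arg.startswith(prefix):
               return arg[len(prefix):]
       return None

   print(get_flag(portable_args, "environment_type"))  # EXTERNAL
   ```

   These would typically be passed to `PipelineOptions` when constructing the pipeline. With a bounded source, this configuration behaves as expected.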
   
   However, for an _unbounded_ data source, following the stack trace shows that the environment is actually [hardcoded](https://github.com/apache/beam/blob/d9c1e4e4f6cc23dd2c873aef66d1c097883dbb53/runners/java-fn-execution/src/main/java/org/apache/beam/runners/fnexecution/control/DefaultJobBundleFactory.java#L235-L260) to the **DOCKER** environment.
   
   [sample 
run](https://github.com/yuwtennis/apache-beam-pipeline-apps/tree/b2d58b1aec2156f4925711593e95daac039a52c2/python#portablerunner-w-flink-1)
   
   ```
   2025-11-27 12:25:37,368 DEBUG 
org.apache.beam.runners.fnexecution.environment.DockerEnvironmentFactory [] - 
Creating Docker Container with ID 1-1
   ```
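   A quick way to confirm which environment a given run actually used is to scan the taskmanager log for the two factory class names quoted above; a small sketch:

   ```python
   # Classify a Flink taskmanager log by which Beam environment factory
   # produced its DEBUG lines. The class names match the snippets above.
   FACTORIES = {
       "ExternalEnvironmentFactory": "EXTERNAL",
       "DockerEnvironmentFactory": "DOCKER",
   }

   def detect_environments(log_text):
       """Return the set of environment types whose factory appears in the log."""
       return {env for name, env in FACTORIES.items() if name in log_text}

   bounded_log = "... ExternalEnvironmentFactory [] - Requesting worker ID 1-1"
   unbounded_log = "... DockerEnvironmentFactory [] - Creating Docker Container with ID 1-1"
   print(detect_environments(bounded_log))    # {'EXTERNAL'}
   print(detect_environments(unbounded_log))  # {'DOCKER'}
   ```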
   
   Below are the logs.
   
   [bounded.txt](https://github.com/user-attachments/files/23798151/bounded.txt)
   
[unbounded.txt](https://github.com/user-attachments/files/23798150/unbounded.txt)

