Hi,
Just FYI, the same thing works on a different image: the one I built using
my company's image as the base image. I've only replaced the base image
with ubuntu. But given that the error log is completely unhelpful, it's
really hard for me to continue debugging the issue.
Hi XQ,
The code is simplified from my previous work, so it is still using the old
version. But I've tested with Beam 2.54.0 and the code still works (I
mean using my company's image). If this runs well on your Linux machine, I
guess there could be something related to how I build the docker
Thanks. It works after specifying the output type.
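For readers hitting the same problem: here is a minimal sketch of what "specifying the output type" can look like for the cross-language ReadFromKafka transform. The topic, group, and bootstrap server values are taken from the command shown later in this thread; the explicit `with_output_types` call is my assumption about the fix, not the exact code used here.

```python
# Sketch only: assumes a Kafka broker at localhost:9092 and a topic "test",
# matching the command-line flags used elsewhere in this thread.
import typing

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()  # fill in runner/job_endpoint options as needed

with beam.Pipeline(options=options) as p:
    _ = (
        p
        | "ReadKafka" >> ReadFromKafka(
            consumer_config={
                "bootstrap.servers": "localhost:9092",
                "group.id": "test-group",
            },
            topics=["test"],
        )
        # ReadFromKafka is a cross-language (Java) transform; declaring the
        # element type explicitly helps Python type inference downstream.
        | "DeclareType" >> beam.Map(lambda kv: kv).with_output_types(
            typing.Tuple[bytes, bytes]
        )
        | "Print" >> beam.Map(print)
    )
```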
On Mon, 18 Mar 2024 at 01:44, XQ Hu via user wrote:
> Here is what I did, including how I set up the portable runner with Flink
>
> 1. Start the local Flink cluster
> 2. Start the Flink job server and point to that local cluster: docker run
>
I cloned your repo, which was super useful for reproducing this. Not sure
why you use Beam 2.41, but anyway, I tried this on my Linux machine:
python t.py \
--topic test --group test-group --bootstrap-server localhost:9092 \
--job_endpoint localhost:8099 \
--artifact_endpoint
Here is what I did, including how I set up the portable runner with Flink
1. Start the local Flink cluster
2. Start the Flink job server and point to that local cluster: docker run
--net=host apache/beam_flink1.16_job_server:latest
--flink-master=localhost:8081
3. I use these pipeline options in
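Step 3 is cut off in the archive. For completeness, here is a sketch of portable-runner pipeline options that would match the job server started in step 2; these are not necessarily the exact options used, and `environment_type=LOOPBACK` in particular is my assumption for local testing.

```python
# A sketch, not the exact options from the thread: the job endpoint matches
# the Flink job server's default port; LOOPBACK is an assumption that keeps
# the SDK worker running inside the submitting Python process.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",   # Flink job server started in step 2
    "--environment_type=LOOPBACK",     # run the SDK harness in-process
])
```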
Hi,
I have an issue when setting up a POC of the Python SDK with the Flink
runner in docker-compose. The Python worker harness was not returning any
error, but:
```
python-worker-harness-1 | 2024/03/17 07:10:17 Executing: python -m apache_beam.runners.worker.sdk_worker_main
```
Hello,
The pipeline runs on the host, while host.docker.internal would only be
resolvable inside containers that run with the host network mode. I guess
the pipeline cannot resolve host.docker.internal and therefore fails to
run.
If everything before ReadFromKafka works successfully, a docker
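The resolution point above is easy to verify. This small sketch (the helper name is mine, not from the thread) checks whether the current environment can resolve a hostname, so you can run it both on the host and inside the container to see where host.docker.internal actually resolves:

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if this environment can resolve hostname via DNS/hosts."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# On a plain Linux host, host.docker.internal typically does not resolve;
# inside a container it depends on the Docker setup (e.g. Docker Desktop,
# or an explicit --add-host=host.docker.internal:host-gateway mapping).
print(can_resolve("host.docker.internal"))
```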