Hello,

The pipeline runs on the host, while host.docker.internal is only resolvable
inside containers that run with the host network mode. My guess is that the
pipeline cannot resolve host.docker.internal and therefore fails to run.
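
If that is the case, one workaround I'd try (a sketch assuming the pipeline
is submitted from the host itself, which I haven't verified against your
repo) is to map the name to localhost on the host:

```
# On the machine that submits the pipeline, make host.docker.internal
# resolve to localhost so the host-side client can reach it as well:
echo "127.0.0.1 host.docker.internal" | sudo tee -a /etc/hosts
```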

If everything before ReadFromKafka works successfully, a Docker container
will be launched with the host network mode so that
host.docker.internal:9092 can be resolved inside the container. As far as
I've checked, however, this fails when the Flink cluster itself runs on
Docker, and I had to rely on a local Flink cluster instead. If you'd like to
try Docker anyway, you should have the Docker CLI installed in your custom
Docker image and volume-map /var/run/docker.sock into the Flink task
manager. Otherwise, it won't be able to launch the Docker container that
reads the Kafka messages.
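
As a sketch of that volume mapping (the service and image names below are my
assumptions, not taken from your compose file):

```
services:
  taskmanager:
    # hypothetical custom image that has the Docker CLI installed
    image: my-flink-with-docker:latest
    command: taskmanager
    volumes:
      # let the task manager talk to the host's Docker daemon so it can
      # launch the Java SDK harness container that ReadFromKafka needs
      - /var/run/docker.sock:/var/run/docker.sock
```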

Cheers,
Jaehyeon


On Sun, 17 Mar 2024 at 18:21, Lydian Lee <tingyenlee...@gmail.com> wrote:

> Hi,
>
> I have an issue when setting up a POC of the Python SDK with the Flink runner
> running in docker-compose. The Python worker harness did not return any
> error, only:
> ```
> python-worker-harness-1  | 2024/03/17 07:10:17 Executing: python -m apache_beam.runners.worker.sdk_worker_main
> python-worker-harness-1  | 2024/03/17 07:10:24 Python exited: <nil>
> ```
> and then died. The error message is not helpful at all, and I am wondering
> if there's a way to make the harness script show more debug logging.
>
> I started my harness via:
> ```
> /opt/apache/beam/boot --worker_pool
> ```
> and configured my script to use the harness:
> ```
> python docker/src/example.py \
>   --topic test --group test-group \
>   --bootstrap-server host.docker.internal:9092 \
>   --job_endpoint host.docker.internal:8099 \
>   --artifact_endpoint host.docker.internal:8098 \
>   --environment_type=EXTERNAL \
>   --environment_config=host.docker.internal:50000
> ```
> The full setup is available at:
> https://github.com/lydian/beam-python-flink-runner-examples
> Thanks for your help
>
>
