Hi Dian,
Thanks a lot for your input. That’s a valid solution. We avoid using fat jars
in the Java API because they easily lead to class conflicts. But PyFlink is like
the SQL API in that user-imported Java dependencies are comparatively rare, so a
fat jar is a proper choice.
Best,
Paul Lam
> On 14 Dec 2021, at 19:
Hi Paul,
For connectors (including Kafka), it's recommended to use the fat jar which
contains all the dependencies. For example, for Kafka, you could use
https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka_2.11/1.14.0/flink-sql-connector-kafka_2.11-1.14.0.jar
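For completeness, here is a minimal sketch of how that fat jar could be wired into
a PyFlink Table API job via the 'pipeline.jars' option (the local file path below
is only a placeholder for wherever you downloaded the jar):

from pyflink.table import EnvironmentSettings, TableEnvironment

# Create a streaming TableEnvironment
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Point the job at the downloaded fat jar; the value must be a URL (e.g. file://)
t_env.get_config().get_configuration().set_string(
    "pipeline.jars",
    "file:///path/to/flink-sql-connector-kafka_2.11-1.14.0.jar")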
Regards,
Dian
Hi!
I’m trying out PyFlink and looking for the best practice to manage Java
dependencies.
The docs recommend using the 'pipeline.jars' configuration or command line
options to specify jars for a PyFlink job. However, PyFlink users may not know
which Java dependencies are required. For example, a