Thanks for your opinions. The majority voted for option (2), fat jars
that are ready to be used. I will create a Jira issue and prepare the
infrastructure for the first connector and first format.
On 3/1/18 at 11:38 AM, Fabian Hueske wrote:
I agree, option (2) would be the easiest approach for the users.
2018-03-01 0:00 GMT+01:00 Rong Rong <walter...@gmail.com>:
Thanks for initiating the SQL client effort. I agree with Xingcan's
points, and would add that (1) most users of the SQL client will very
likely have little Maven / build tool knowledge, and (2) the build
script would most likely grow much more complex over time, making it
exponentially harder for users to modify themselves.
On (3), the single "fat" jar idea: in addition to the dependency conflict
issue, another very common pattern I see is that users want to maintain a
list of individual jars, such as a set of relatively constant, handy UDFs
they use every time with the SQL client. They will probably need to package
and ship those separately anyway. I was wondering if "download-and-drop-in"
might be a more straightforward approach?
On Tue, Feb 27, 2018 at 8:23 AM, Stephan Ewen <se...@apache.org> wrote:
I think one problem with the "one fat jar for all" approach is that some
dependencies' class names clash across versions:
- Kafka 0.9, 0.10, 0.11, 1.0
- Elasticsearch 2, 4, and 5
There are probably others as well...
On Tue, Feb 27, 2018 at 2:57 PM, Timo Walther <twal...@apache.org> wrote:
thank you for your feedback. Regarding (3), we also thought about that,
but this approach would not scale very well. Given that we might have fat
jars for multiple versions (Kafka 0.8, Kafka 0.6 etc.), such an all-in-one
solution JAR file might easily go beyond 1 or 2 GB. I don't know if users
want to download that just for one combination of connector and format.
On 2/27/18 at 2:16 PM, Xingcan Cui wrote:
thanks for your efforts. Personally, I think the second option would be
better, and here are my thoughts.
(1) The SQL client is designed to offer a convenient way for users to
manipulate data with Flink. Obviously, the second option would be more
convenient.
(2) The script will help to manage the dependencies automatically, but
with less flexibility. Once the script cannot meet a need, users will
have to modify it themselves.
(3) I wonder whether we could package all these built-in connectors and
formats into a single JAR. With this all-in-one solution, users wouldn't
need to think much about the dependencies.
On 27 Feb 2018, at 6:38 PM, Stephan Ewen <se...@apache.org> wrote:
My first intuition would be to go for approach #2, for the following reasons:
- I expect that in the long run, the scripts will not be that simple to
maintain. We saw that with all shell scripts thus far: they start simple
and then grow with many special cases for this and that setup.
- Not all users have Maven. Automatically downloading and configuring
Maven could be an option, but that makes the scripts yet more tricky.
- Download-and-drop-in is probably still easier to understand for users
than the syntax of a script with its parameters.
- I think it may actually be even simpler for us to maintain, because all
it does is add a profile or build target to each connector to also build
the fat jar.
- Storage space is no longer really a problem. Worst case, we host the
fat jars in an S3 bucket.
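The "profile or build target per connector" idea above could be sketched roughly as follows. Everything here is an illustrative assumption, not the actual Flink build setup: the profile name "sql-jars" and the module paths are made up, and the Maven command is only printed, not executed.

```shell
#!/bin/sh
# Hypothetical sketch: build an attached fat jar for one connector module
# via a dedicated Maven profile. Names are illustrative assumptions.
build_sql_jar() {
  module="$1"   # e.g. "flink-connector-kafka-0.10"
  # A real build step would invoke Maven; here we only print the command:
  echo "mvn clean package -pl flink-connectors/${module} -P sql-jars -DskipTests"
}
build_sql_jar flink-connector-kafka-0.10
```

The attraction of this approach is that each connector module owns its own fat-jar definition, so release automation only has to iterate over modules.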
On Mon, Feb 26, 2018 at 7:33 PM, Timo Walther <twal...@apache.org> wrote:
as you may know, a first minimum version of FLIP-24 for the
Flink SQL Client has been merged to the master. We also merged the
possibility to discover and configure table sources without a single line
of code, using string-based properties and Java service providers.
We are now facing the issue of how to manage dependencies in this
environment. It is different from how regular Flink projects are set up
(by creating a new Maven project and building a jar or fat jar). Instead,
a user should be able to select from a set of prepared connectors,
catalogs, and formats. E.g., if a Kafka connector and Avro format is
needed, all that should be required is to move a "flink-kafka.jar" and a
"flink-avro.jar" into the "sql_lib" directory that is shipped to a
cluster together with the SQL query.
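The envisioned drop-in workflow might look like this on the command line. The directory and jar names are taken from the text above; the jars themselves are simulated with empty placeholders, since in practice they would be prebuilt connector/format fat jars.

```shell
# Sketch of the intended "drop-in" workflow. The jar files are simulated
# with empty placeholders for illustration only.
mkdir -p sql_lib
touch sql_lib/flink-kafka.jar sql_lib/flink-avro.jar   # placeholders
ls sql_lib
# The sql_lib directory would then be shipped to the cluster
# together with the SQL query.
```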
The question is how we want to offer those JAR files. I see two options:
1) We prepare Maven build profiles for all offered modules and a
shell script for building fat jars. A script call could look like
"./sql-client-dependency.sh kafka 0.10". It would automatically build
what is needed and place the JAR file in the library folder. This
approach would keep our development effort low, but would require Maven
to be installed and builds to pass on different environments (e.g. Windows).
2) We build fat jars for these modules with every Flink release and
host them somewhere (e.g. on Apache infrastructure, but not Maven Central).
This would make it very easy to add a dependency by downloading the
prepared JAR files. However, it would require building and hosting fat
jars for every connector (and version) with every Flink major and minor
release. The size of such a repository might grow quickly.
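Option 1's script could be sketched roughly as follows. This is a dry-run illustration only: the profile naming scheme and jar paths are assumptions, and the Maven command is printed rather than executed.

```shell
#!/bin/sh
# Hypothetical sketch of "./sql-client-dependency.sh kafka 0.10" from
# option 1: map a connector and version to a Maven profile and show the
# commands that would build the fat jar into the library folder.
sql_client_dependency() {
  connector="$1"   # e.g. "kafka"
  version="$2"     # e.g. "0.10"
  profile="sql-jar-${connector}-${version}"   # assumed naming scheme
  echo "mvn clean package -P ${profile} -DskipTests"
  echo "cp target/*-${connector}-${version}-fat.jar sql_lib/"
}
sql_client_dependency kafka 0.10
```

Even in this toy form, the version-to-profile mapping hints at how the script would accumulate special cases as connectors and versions multiply.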
What do you think? Do you see other options to make adding dependencies easier?