Hi Jun,

Thank you very much for your contribution.

I think a Bucketing File System Table Sink would be a great addition.

Our code contribution guidelines [1] recommend discussing the design with
the community before opening a PR.
First of all, this ensures that the design is aligned with Flink's codebase
and planned features.
Moreover, it helps to find a committer who can shepherd the PR.

Something that is always a good idea is to split a contribution into
multiple smaller PRs (if possible).
This allows for faster reviews and progress.

Best, Fabian

[1] https://flink.apache.org/contributing/contribute-code.html

On Tue, Sep 17, 2019 at 04:39, Jun Zhang <825875...@qq.com> wrote:

> Hello everyone:
> I am a user and fan of Flink, and I also want to join the Flink community.
> I contributed my first PR a few days ago. Can anyone help me review my
> code? If there is something wrong, I would be grateful for any advice you
> can give.
>
> This PR came out of my development work: I use SQL to read data from
> Kafka and then write it to HDFS, but I found that there is no suitable
> TableSink. The documentation says the File System Connector is only
> experimental (
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#file-system-connector),
> so I wrote a Bucket File System Table Sink that supports writing stream
> data to HDFS and local file systems. The supported data formats are JSON,
> CSV, Parquet, and Avro; I plan to subsequently add support for other
> formats, such as Protobuf and Thrift.
>
> In addition, I also added documentation, a Python API, unit tests,
> end-to-end tests, sql-client support, and DDL, and verified the build on
> Travis.
>
> The issue is https://issues.apache.org/jira/browse/FLINK-12584
> Thank you very much.
>
