Hi Fabian,

Thank you very much for your suggestion. This came from my work, where I use
Flink SQL to write data to HDFS and found it inconvenient, so I wrote this
feature and would like to contribute it to the community. This is my first PR,
so some of the processes may not be clear to me; I am very sorry.


Kurt suggested combining this feature with FLIP-63 because they have some
features in common, such as writing data to a file system in various formats,
so I would like to treat this feature as a sub-task of FLIP-63: add a
partitionable bucket file system table sink.


I then added the design document and sent a DISCUSS email to explain my
detailed design ideas and implementation. What do you think?
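
For illustration, here is a rough sketch of the kind of DDL such a sink might
be declared with. The connector and property names below are only placeholders
for illustration, not my actual design:

  -- hypothetical sketch: property names are illustrative, not the final design
  CREATE TABLE bucket_fs_sink (
    user_id BIGINT,
    item_id BIGINT,
    dt STRING
  ) PARTITIONED BY (dt) WITH (
    'connector.type' = 'filesystem',          -- write to HDFS or a local file system
    'connector.path' = 'hdfs:///tmp/output',  -- base output path
    'format.type' = 'json'                    -- json, csv, parquet or avro
  );

  -- stream rows from a Kafka source table into the bucketed file system sink
  INSERT INTO bucket_fs_sink SELECT user_id, item_id, dt FROM kafka_source;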






------------------ Original ------------------
From: Fabian Hueske <fhue...@gmail.com>
Date: Fri, Sep 20, 2019 9:38 PM
To: Jun Zhang <825875...@qq.com>
Cc: dev <dev@flink.apache.org>, user <u...@flink.apache.org>
Subject: Re: Add Bucket File System Table Sink



Hi Jun,

Thank you very much for your contribution.

I think a Bucketing File System Table Sink would be a great addition.

Our code contribution guidelines [1] recommend discussing the design with
the community before opening a PR.
First of all, this ensures that the design is aligned with Flink's codebase
and future features.
Moreover, it helps to find a committer who can help to shepherd the PR.

Something that is always a good idea is to split a contribution into multiple
smaller PRs (if possible).
This allows for faster review and progress.

Best, Fabian

[1] https://flink.apache.org/contributing/contribute-code.html

On Tue, Sep 17, 2019 at 04:39 Jun Zhang <825875...@qq.com> wrote:

> Hello everyone:
> I am a user and a fan of Flink, and I would like to join the Flink
> community. I contributed my first PR a few days ago. Can anyone help me
> review my code? If there is something wrong, I would be grateful if you
> could give some advice.
>
> This PR grew out of my development work: I used SQL to read data from
> Kafka and write it to HDFS, but found that there was no suitable table
> sink. The documentation says that the File System Connector is only
> experimental (
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#file-system-connector),
> so I wrote a Bucket File System Table Sink that supports writing stream
> data to HDFS and the local file system, with the data formats JSON, CSV,
> Parquet, and Avro. Support for other formats, such as Protobuf and
> Thrift, can be added later.
>
> In addition, I also added documentation, the Python API, unit tests,
> end-to-end tests, SQL client and DDL support, and compiled it on Travis.
>
> The issue is https://issues.apache.org/jira/browse/FLINK-12584
> Thank you very much
>
>
