[https://issues.apache.org/jira/browse/FLINK-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068413#comment-17068413]
Jingsong Lee commented on FLINK-16818:
--------------------------------------
Hi [~zhangjun], actually Spark does not shuffle before the sink, but Flink adds a
shuffle before the sink, so in Flink only a single task ends up writing the files
for a partition. After FLIP-115 we will add a config option to control this, and we
will also consider making "no shuffle before the sink" the default.
The other point is the 10 GB file: there is no rolling policy for the Flink batch sink.
IMO that does not matter as much, but we will add a rolling policy in FLIP-115 too.
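For reference on what a rolling policy does, here is a minimal sketch assuming the
existing row-format StreamingFileSink API (the output path and class name are made up;
this is not the batch/Hive sink this issue is about). FLIP-115 is expected to bring a
comparable size-based policy to that sink, but the exact option names are not specified here:
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

// Illustrative only: a rolling policy that starts a new part file every 128 MB,
// so a 10 GB partition is split into bounded part files instead of one huge file.
public class RollingPolicySketch {
    public static StreamingFileSink<String> buildSink() {
        return StreamingFileSink
            .forRowFormat(new Path("/tmp/myparttable"), new SimpleStringEncoder<String>("UTF-8"))
            .withRollingPolicy(
                DefaultRollingPolicy.builder()
                    .withMaxPartSize(128 * 1024 * 1024)                   // roll at 128 MB
                    .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))  // or every 15 minutes
                    .withInactivityInterval(TimeUnit.MINUTES.toMillis(5)) // or after 5 idle minutes
                    .build())
            .build();
    }
}
{code}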
> Optimize data skew when Flink writes data to a Hive dynamic partition table
> -----------------------------------------------------------------------------
>
> Key: FLINK-16818
> URL: https://issues.apache.org/jira/browse/FLINK-16818
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Hive
> Affects Versions: 1.10.0
> Reporter: Jun Zhang
> Priority: Major
> Fix For: 1.11.0
>
>
> I read data from a Hive source table through Flink SQL and then write it into a Hive
> target table. The target table is partitioned. When one partition holds a particularly
> large share of the data, data skew occurs and the job takes a particularly long time to finish.
> With the default configuration, the same SQL takes about five minutes on Hive on Spark,
> but about 40 minutes on Flink.
> example:
>
> {code:sql}
> -- the schema of myparttable
> CREATE TABLE myparttable (
>   name STRING,
>   age INT
> ) PARTITIONED BY (
>   type STRING,
>   day STRING
> );
>
> INSERT OVERWRITE myparttable SELECT name, age, type, day FROM sourcetable;
> {code}
>