[
https://issues.apache.org/jira/browse/FLINK-20945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17449228#comment-17449228
]
Martijn Visser commented on FLINK-20945:
----------------------------------------
[~bruce-gao] [~aswinram92] Could you confirm whether this is still an issue for you?
> flink hive insert heap out of memory
> ------------------------------------
>
> Key: FLINK-20945
> URL: https://issues.apache.org/jira/browse/FLINK-20945
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / Ecosystem
> Environment: flink 1.12.0
> hive-exec 2.3.5
> Reporter: Bruce GAO
> Priority: Not a Priority
> Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> When using Flink SQL to insert into Hive from Kafka, a heap out-of-memory
> error occurs randomly.
> The Hive table is partitioned by year/month/day/hour, and the maximum heap
> space needed appears to be proportional to the number of active partitions
> (which grows when Kafka messages arrive out of order or late). As the number
> of active partitions increases, so does the required heap space, which can
> lead to the heap running out of memory.
> When writing records, would it be possible to take total heap usage into
> account in checkBlockSizeReached, or is there some other way to avoid the OOM?
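For reference, below is a minimal sketch of the kind of job described above, assuming
illustrative table, column, topic, and connector settings (the actual schema and options
are not given in this report). Late or out-of-order Kafka records keep several hour
partitions open at the same time, each with its own buffering file writer:

    -- Kafka source with event-time timestamps (illustrative schema)
    CREATE TABLE kafka_events (
      id STRING,
      payload STRING,
      ts TIMESTAMP(3),
      WATERMARK FOR ts AS ts - INTERVAL '1' MINUTE
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'events',
      'properties.bootstrap.servers' = 'kafka:9092',
      'format' = 'json'
    );

    -- Hive sink partitioned by year/month/day/hour, as in the report
    SET table.sql-dialect=hive;
    CREATE TABLE hive_events (
      id STRING,
      payload STRING
    ) PARTITIONED BY (y STRING, m STRING, d STRING, h STRING)
    STORED AS PARQUET
    TBLPROPERTIES (
      'partition.time-extractor.timestamp-pattern' = '$y-$m-$d $h:00:00',
      'sink.partition-commit.trigger' = 'partition-time',
      'sink.partition-commit.policy.kind' = 'metastore,success-file'
    );

    SET table.sql-dialect=default;
    -- Each distinct (y, m, d, h) value seen in-flight opens its own partition writer
    INSERT INTO hive_events
    SELECT id, payload,
           DATE_FORMAT(ts, 'yyyy'), DATE_FORMAT(ts, 'MM'),
           DATE_FORMAT(ts, 'dd'),   DATE_FORMAT(ts, 'HH')
    FROM kafka_events;

Assuming the sink writes Parquet files, each open partition writer buffers up to a row
group on the heap, and checkBlockSizeReached only checks that single writer's buffered
size, which matches the observation above that heap demand scales with the number of
simultaneously active partitions rather than being bounded globally.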
--
This message was sent by Atlassian Jira
(v8.20.1#820001)