[ https://issues.apache.org/jira/browse/FLINK-20945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-20945:
-----------------------------------
      Labels: auto-deprioritized-major  (was: stale-major)
    Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> flink hive insert heap out of memory
> ------------------------------------
>
>                 Key: FLINK-20945
>                 URL: https://issues.apache.org/jira/browse/FLINK-20945
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Ecosystem
>         Environment: flink 1.12.0 
> hive-exec 2.3.5
>            Reporter: Bruce GAO
>            Priority: Minor
>              Labels: auto-deprioritized-major
>
> When using Flink SQL to insert into Hive from Kafka, heap out-of-memory 
> errors occur randomly.
> The Hive table uses year/month/day/hour as its partition keys, and the 
> maximum heap space needed appears to correspond to the number of active 
> partitions (which grows when Kafka messages arrive out of order or 
> delayed). This means that as the partition count increases, the heap 
> space needed also increases, which may cause the heap out of memory.
> When writing a record, is it possible to take the whole heap space 
> usage into account in checkBlockSizeReached, or use some other method 
> to avoid the OOM?
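
For illustration, below is a minimal sketch of the kind of heap-aware 
flush check the reporter is asking about. This is not Flink's or Hive's 
actual implementation: checkBlockSizeReached is the method named in the 
report, but the class, constants, and write/flush methods here are 
hypothetical.

    // Hypothetical per-partition writer; only checkBlockSizeReached
    // is a name taken from the report, the rest is illustrative.
    public class HeapAwareWriter {

        // Flush when this writer's buffered block exceeds this size (bytes).
        private static final long BLOCK_SIZE_LIMIT = 64L * 1024 * 1024;

        // Also flush when overall JVM heap usage crosses this fraction,
        // so many concurrently open partition writers (one per active
        // year/month/day/hour partition) cannot exhaust the heap together.
        private static final double HEAP_USAGE_LIMIT = 0.8;

        private long bufferedBytes;

        boolean checkBlockSizeReached() {
            if (bufferedBytes >= BLOCK_SIZE_LIMIT) {
                return true;
            }
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            double usedFraction = (double) used / rt.maxMemory();
            // Flush early under heap pressure, even if this writer's own
            // buffer is still below its per-writer limit.
            return usedFraction >= HEAP_USAGE_LIMIT;
        }

        void write(byte[] record) {
            bufferedBytes += record.length;
            if (checkBlockSizeReached()) {
                flush();
            }
        }

        void flush() {
            // A real writer would spill the buffered block to storage here.
            bufferedBytes = 0;
        }
    }

The trade-off is that flushing under global heap pressure produces 
smaller files for partitions whose buffers were still far from full.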



--
This message was sent by Atlassian Jira
(v8.3.4#803005)