[
https://issues.apache.org/jira/browse/FLINK-20945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Flink Jira Bot updated FLINK-20945:
-----------------------------------
Labels: auto-deprioritized-major stale-minor (was:
auto-deprioritized-major)
I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help
the community manage its development. I see this issue has been marked as
Minor, but it is unassigned and neither it nor its Sub-Tasks have been updated
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is
still Minor, please either assign yourself or give an update. Afterwards,
please remove the label, or in 7 days the issue will be deprioritized.
> flink hive insert heap out of memory
> ------------------------------------
>
> Key: FLINK-20945
> URL: https://issues.apache.org/jira/browse/FLINK-20945
> Project: Flink
> Issue Type: Improvement
> Components: Table SQL / Ecosystem
> Environment: flink 1.12.0
> hive-exec 2.3.5
> Reporter: Bruce GAO
> Priority: Minor
> Labels: auto-deprioritized-major, stale-minor
>
> When using Flink SQL to insert into Hive from Kafka, a heap out-of-memory
> error occurs randomly.
> The Hive table uses year/month/day/hour as partition columns, and the maximum
> heap space needed seems to correspond to the number of active partitions
> (which grows when Kafka messages arrive out of order or delayed). This means
> that as the active partition count increases, the heap space needed also
> increases, which may cause the heap out-of-memory error.
> When writing records, would it be possible to take total heap usage into
> account in checkBlockSizeReached, or to use some other method to avoid the OOM?
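Since each active partition holds an open writer with its own buffer, one workaround (a sketch only, not a confirmed fix for this ticket) is to tighten the sink's rolling policy so writers flush and close files sooner, bounding the per-partition buffer. The `sink.rolling-policy.*` and `sink.partition-commit.*` properties below come from Flink's Hive streaming-sink documentation; the table name, columns, and chosen thresholds are hypothetical:

```sql
-- Hypothetical Hive DDL (Hive dialect) with tighter rolling thresholds,
-- so each open partition writer buffers less data before rolling a file.
CREATE TABLE ods_events (
  id BIGINT,
  payload STRING
) PARTITIONED BY (`year` STRING, `month` STRING, `day` STRING, `hour` STRING)
STORED AS parquet TBLPROPERTIES (
  'sink.rolling-policy.file-size' = '64MB',           -- roll earlier than the 128MB default
  'sink.rolling-policy.rollover-interval' = '10 min', -- close slow/idle writers sooner
  'sink.partition-commit.trigger' = 'partition-time',
  'sink.partition-commit.policy.kind' = 'metastore,success-file'
);
```

This only caps memory per writer; the total is still roughly (active partitions) x (per-writer buffer), so with heavily disordered input the number of simultaneously open partitions remains the dominant factor, which is what the ticket asks `checkBlockSizeReached` (or an alternative) to account for.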
--
This message was sent by Atlassian Jira
(v8.20.1#820001)