Some things you could try:

   - Increase the JM memory (as Shawn Huang suggested)
   - Adjust the batch size used for reads during the snapshot phase, so that the state stays smaller and the JM memory pressure during checkpoints is reduced (see the sketch below)
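
For the second point, roughly something like this (a sketch only: the table definition, connection settings and the value 40960 are placeholders I made up, and I am assuming the batch size in question is the mysql-cdc option scan.incremental.snapshot.chunk.size):

    CREATE TABLE orders_src (
      id BIGINT,
      PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
      'connector' = 'mysql-cdc',
      'hostname' = 'mysql-host',
      'port' = '3306',
      'username' = 'flink',
      'password' = 'secret',
      'database-name' = 'mydb',
      'table-name' = 'orders',
      -- larger chunks mean fewer splits, so the SourceEnumerator state that the
      -- JM has to checkpoint stays smaller (the default chunk size is 8096 rows)
      'scan.incremental.snapshot.chunk.size' = '40960'
    );

The trade-off is that each split then reads and buffers more rows at a time on the TaskManager side, so don't push the chunk size too far.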


Best,
Zhongqiang Gong

wyk <wyk118...@163.com> wrote on Tue, Apr 9, 2024 at 16:56:

>
> Yes, the number of splits is quite large: more than 17,000 of them.
>
> The JM memory is currently 2g; after raising it to 4g the problem still occurs. I am worried that if I just keep increasing the JM memory, it will be wasted later during the incremental phase. I found the JobManager memory parameters on the Flink website, but I am not very familiar with them and don't know how to tune them properly. Could you take a look at the parameters in the table below and tell me which one would be appropriate to adjust?
>
> The Flink documentation page is:
> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/memory/mem_setup_jobmanager/
>
>
>
>
>   Component       | Configuration options                                  | Description
>   JVM Heap        | jobmanager.memory.heap.size                            | JVM Heap memory size for job manager.
>   Off-heap Memory | jobmanager.memory.off-heap.size                        | Off-heap memory size for job manager. This option covers all off-heap memory usage including direct and native memory allocation.
>   JVM metaspace   | jobmanager.memory.jvm-metaspace.size                   | Metaspace size of the Flink JVM process.
>   JVM Overhead    | jobmanager.memory.jvm-overhead.min / .max / .fraction  | Native memory reserved for other JVM overhead, e.g. thread stacks, code cache, garbage collection space; a capped fractionated component of the total process memory.
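>
> For instance, would something along these lines in flink-conf.yaml be a reasonable starting point? (The values below are just my guesses.)
>
>     # raise only the JVM heap, where the job manager keeps the checkpoint /
>     # enumerator state, instead of the whole process size?
>     jobmanager.memory.heap.size: 3g
>     # or keep scaling the total budget?
>     # jobmanager.memory.process.size: 6g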
>
>
>
>
> On 2024-04-09 11:28:57, "Shawn Huang" <hx0...@gmail.com> wrote:
>
>
> From the error message, the JobManager's heap memory is insufficient, so you can try increasing the JM memory. One possible cause is that the MySQL table is split into a large number of chunks during the full snapshot phase, which makes the SourceEnumerator state large.
>
> Best,
> Shawn Huang
>
>
wyk <wyk118...@163.com> wrote on Mon, Apr 8, 2024 at 17:46:
>
>>
>>
>> Hi developers,
>>         Flink version: 1.14.5
>>         flink-cdc version: 2.2.0
>>
>>  When using flink-cdc-mysql to capture a full snapshot, checkpoints are taken during the full phase, but the checkpoint runs into an OOM. Is there anything that can be done about this?
>>        The specific error is shown in the attached text file and the image below:
>>
>>
>>
