Well, the hosts have 16GB.

If there is a "bug" with classloading, then for now all I can hope to do is
increase the Metaspace size.
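
(If I do go the heap dump route from the docs Matthias linked, I assume the
usual jmap invocation against the TaskManager PID would be enough, something
like

jmap -dump:live,format=b,file=/tmp/taskmanager.hprof <taskmanager-pid>

and then looking for leaked classloaders in a tool like Eclipse MAT.)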

If the host has 16GB:

Can I set the Java heap to, say, 12GB and the Metaspace to 2GB, and leave 2GB
for the OS?
Or maybe 10GB for heap and 2GB for Metaspace, which leaves 4GB for everything
else including the OS?
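
In flink-conf.yaml terms I assume that second split would look roughly like
this (just a sketch on my side, not tested):

taskmanager.memory.flink.size: 10240m        # heap + managed + network etc.
taskmanager.memory.jvm-metaspace.size: 2048m # becomes -XX:MaxMetaspaceSize, as I understand it

Since the Metaspace and JVM overhead come on top of flink.size, that would
put the whole process at roughly 13GB and leave around 3GB for the OS.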

This is from my live taskmanager:

taskmanager.memory.flink.size: 10240m
taskmanager.memory.jvm-metaspace.size: 1024m
taskmanager.numberOfTaskSlots: 12

Physical Memory: 15.7 GB
JVM Heap Size: 4.88 GB
Flink Managed Memory: 4.00 GB

JVM (Heap/Non-Heap)
Type       Committed   Used      Maximum
Heap       4.88 GB     2.16 GB   4.88 GB
Non-Heap   416 MB      404 MB    2.23 GB
Total      5.28 GB     2.55 GB   7.10 GB

Outside JVM
Type       Count       Used      Capacity
Direct     32,836      1.01 GB   1.01 GB
Mapped     0           0 B       0 B
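
If I'm reading the 1.10 memory model right, those numbers roughly add up
from flink.size with the default fractions (my own back-of-the-envelope, so
take it with a grain of salt):

flink.size                   = 10240m
managed (0.4 * 10240m)       =  4096m  -> "Flink Managed Memory: 4.00 GB"
network (0.1, capped at 1g)  =  1024m  -> the ~1 GB of direct buffers
framework heap               =   128m
framework off-heap           =   128m
task heap (remainder)        =  4864m
JVM heap (128m + 4864m)      =  4992m  -> "JVM Heap Size: 4.88 GB"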



On Tue, 23 Nov 2021 at 02:23, Matthias Pohl <matth...@ververica.com> wrote:

> In general, running out of memory in the Metaspace pool indicates some bug
> related to the classloaders. Have you considered upgrading to new versions
> of Flink and other parts of your pipeline? Otherwise, you might want to
> create a heap dump and analyze that one [1]. This analysis might reveal
> some pointers to what is causing the problem.
>
> Matthias
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/application_profiling/#analyzing-out-of-memory-problems
>
> On Mon, Nov 22, 2021 at 8:34 PM John Smith <java.dev....@gmail.com> wrote:
>
>> Hi, thanks. I know, I already mentioned that I put 1024, see the config
>> above. But my question is how much? I still get the message once in a
>> while. It also seems to happen if a job restarts a few times... My jobs
>> aren't complicated. They use Kafka, some of them JDBC and the JDBC driver
>> to push to the DB. Right now I use Flink for ETL:
>>
>> Kafka -> JSON validation (Jackson) -> filter -> JDBC to database.
>>
>> On Mon, 22 Nov 2021 at 10:24, Matthias Pohl <matth...@ververica.com>
>> wrote:
>>
>>> Hi John,
>>> have you had a look at the memory model for Flink 1.10? [1] Based on the
>>> documentation, you could try increasing the Metaspace size independently of
>>> the Flink memory usage (i.e. flink.size). The heap size is a part of the
>>> overall Flink memory. I hope that helps.
>>>
>>> Best,
>>> Matthias
>>>
>>> [1]
>>> https://nightlies.apache.org/flink/flink-docs-release-1.10/ops/memory/mem_detail.html
>>>
>>> On Mon, Nov 22, 2021 at 3:58 PM John Smith <java.dev....@gmail.com>
>>> wrote:
>>>
>>>> Hi, has anyone seen this?
>>>>
>>>> On Tue, 16 Nov 2021 at 14:14, John Smith <java.dev....@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi, running Flink 1.10.
>>>>>
>>>>> I have
>>>>> - 3 job nodes 8GB memory total
>>>>>     - jobmanager.heap.size: 6144m
>>>>>
>>>>> - 3 task nodes 16GB memory total
>>>>>     - taskmanager.memory.flink.size: 10240m
>>>>>     - taskmanager.memory.jvm-metaspace.size: 1024m <--- This still
>>>>> causes metaspace errors once in a while; can I go higher, or do I need
>>>>> to lower the 10GB above?
>>>>>
>>>>> The task nodes on the UI are reporting:
>>>>> - Physical Memory: 15.7 GB
>>>>> - JVM Heap Size: 4.88 GB <------- I'm guessing this is the currently
>>>>> used heap size and not the max of 10GB set above?
>>>>> - Flink Managed Memory: 4.00 GB
>>>>>
>>>>
